CN115861683A - Rapid dimensionality reduction method for hyperspectral image - Google Patents


Publication number
CN115861683A
CN115861683A (application CN202211432621.8A)
Authority
CN
China
Prior art keywords
matrix
hyperspectral image
nodes
node
hyperspectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211432621.8A
Other languages
Chinese (zh)
Other versions
CN115861683B
Inventor
苏远超
白晋颖
蒋梦莹
李朋飞
刘�英
杨军
郝希刘荣
刘乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Science and Technology
Original Assignee
Xian University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Science and Technology filed Critical Xian University of Science and Technology
Priority to CN202211432621.8A priority Critical patent/CN115861683B/en
Publication of CN115861683A publication Critical patent/CN115861683A/en
Application granted granted Critical
Publication of CN115861683B publication Critical patent/CN115861683B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a rapid dimension reduction method for a hyperspectral image, which relates to the field of hyperspectral data processing and comprises the following steps: firstly, the hyperspectral image is converted into grid-structured data according to pixel adjacency; then, the local correlation characteristics of the hyperspectral image are acquired through a learnable iterative filter; an undirected graph is established based on the grid-structured data and the local correlation characteristics of the hyperspectral image; finally, the similar vertexes in the undirected graph are converged by using a manifold geometric aggregation mechanism to obtain a low-dimensional hyperspectral image containing global correlation characteristics. The method not only reduces the data dimensionality but also preserves the ground-object correlation characteristics from local to global; it reduces the storage burden, can improve the accuracy of ground-object classification, and meets the market demand of applications that use hyperspectral data but are limited by its storage burden.

Description

Rapid dimensionality reduction method for hyperspectral image
Technical Field
The invention relates to the field of hyperspectral data processing, in particular to a rapid dimension reduction method for a hyperspectral image.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
The hyperspectral sensor can acquire the spatial details and spectral signals of the land surface, and hyperspectral remote sensing images can well delineate land cover types; the high spectral resolution provides abundant spectral information for distinguishing different ground features. However, the many spectral bands of a hyperspectral image also bring serious information redundancy, which increases the computation and storage burden of a computer; when the data are used for ground-object classification, the classification accuracy is easily affected by the Hughes phenomenon. To alleviate the above problems, dimension reduction of the hyperspectral data is required. Hyperspectral data usually comprise hundreds to thousands of bands; although these can provide abundant spectral information, they bring great inconvenience to data storage. In addition, the large number of bands carries a large amount of redundant information, which adversely affects the accuracy of hyperspectral image classification.
Spectral dimension reduction is a data compression technique that can mine the bidirectional correlation between ground-object spatial information and spectral characteristics; it is mainly applied to reducing the redundant information of hyperspectral data, reducing the storage burden of the data, preventing the curse of dimensionality in data processing, lowering the memory requirement for using hyperspectral data, and improving the accuracy of ground-object classification.
At present, common dimension reduction methods comprise supervised and unsupervised dimension reduction. Supervised dimension reduction requires prior knowledge to be provided in advance, which imposes certain limitations in actual use; unsupervised dimension reduction, although highly automated and convenient to use, causes a large loss of information and can hardly meet high-precision classification requirements.
Disclosure of Invention
The invention aims to solve the following problem: although existing dimension reduction methods can reduce the data dimension and relieve the storage burden, they cause the hyperspectral data to lose a large amount of spatial and spectral correlation information, which seriously affects the accuracy of ground-object classification with hyperspectral data.
The technical scheme of the invention is as follows:
a rapid dimensionality reduction method for a hyperspectral image specifically comprises the following steps:
step S1: converting the hyperspectral image into grid-structured data according to pixel adjacency;
step S2: acquiring the local correlation characteristics of the hyperspectral image through a learnable iterative filter; that is, in an undirected graph, a vertex aggregates the local properties of neighboring pixels, and these vertices can provide the initial connections between pixels; in order to establish the vertices, the invention develops a learnable iterative filter that aggregates the adjacent spatial characteristics of each node through iterative learning;
step S3: establishing an undirected graph based on the grid-structured data and the local correlation characteristics of the hyperspectral image;
step S4: converging the similar vertexes in the undirected graph by using a manifold geometric aggregation mechanism to obtain the low-dimensional hyperspectral image containing the global correlation characteristics.
Further, the step S1 includes:
converting the hyperspectral image into grid-structured data according to the adjacency relation between pixels in the hyperspectral image;
wherein the nodes in the grid-structured data correspond to the pixels in the hyperspectral image; i.e. the hyperspectral image can be seen as grid-structured data in which one node corresponds to one pixel.
Further, in the step S1, all elements in the pixels are normalized to the range [0,1] before running the learnable iterative filter, to ensure reasonable spectral reflectance values.
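As a sketch of this preprocessing step, a global min-max scaling is one plausible reading of the normalization described above (the function name and the choice of a global, rather than per-band, scaling are assumptions, since the text does not specify them):

```python
import numpy as np

def normalize_cube(cube):
    """Scale all elements of a hyperspectral cube to [0, 1].

    `cube` is an (n_rows, n_cols, h) array; a single global min-max
    scaling is applied so that every element lies in [0, 1].
    """
    cube = cube.astype(np.float64)
    lo, hi = cube.min(), cube.max()
    return (cube - lo) / (hi - lo)

cube = np.array([[[10.0, 200.0], [50.0, 120.0]]])  # toy 1x2x2-band "image"
norm = normalize_cube(cube)
```

After this step every spectrum lies in [0, 1], which keeps the Gaussian-kernel distances of the later filter on a comparable scale.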
Further, the step S2 includes:
setting a vector x_i ∈ R^h to define a node, where h is the number of spectral bands;
setting an initial node matrix X = [x_1, x_2, ..., x_n]^T ∈ R^(n×h), where n is the number of pixels;
letting (α_i, β_i) represent the spatial coordinates of node x_i, and η be the width of the neighborhood window; it should be noted that η is a hyper-parameter set to an odd number;
the node x_i is regarded as the center of the neighborhood window, and its neighborhood N(x_i) is defined as:
N(x_i) = { x_(α,β) | α ∈ [α_i − θ, α_i + θ], β ∈ [β_i − θ, β_i + θ] }
wherein α and β define the coordinate range of the neighborhood nodes, and θ = (η − 1)/2;
it should be noted that, for the filter, an overly large window collects many heterogeneous pixels, while an overly small window significantly increases the computational burden; relative to the central node x_i, the similarity of the nodes within the window is of primary concern, because neighboring pixels are more likely to be of the same material than more distant ones; therefore, a Gaussian kernel function is adopted to calculate the similarity between each node and the other nodes in its neighborhood window, while the similarity value of the nodes outside the window is defined as 0; because the window contains η² pixels, η² similarity values are obtained, and the n − η² nodes outside the window have a similarity value of 0; the number of bands is usually much smaller than the number of pixels, i.e. h ≪ n;
nodes with local consistency are then determined according to the similarity among nodes, thereby determining the local correlation characteristics between pixels in the hyperspectral image.
Further, calculating the similarity between each node and the other nodes in its neighborhood window by using the Gaussian kernel function includes:
the i-th node is expanded into a sparse vector s_i ∈ R^n by using the indices of the pixels, written as:
s_(i,j) = K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2γ²)) if x_j ∈ N(x_i), and s_(i,j) = 0 otherwise;
wherein K(·,·) is the Gaussian kernel function and γ is a hyper-parameter determining the width of the Gaussian kernel;
the sparse vector s_i represents the similarity between node x_i and all nodes; these vectors form a sparse matrix S = [s_1, ..., s_n]^T ∈ R^(n×n), wherein the matrix S is symmetric and its diagonal elements are 1;
the node matrix X is then updated by means of the sparse matrix S.
Further, updating the node matrix X by means of the sparse matrix S includes:
X ← (S X) ./ (S 1^(n×h))
wherein ./ represents the division of the corresponding elements of two matrices (i.e., "dot division"), and 1^(n×h) represents an all-ones matrix (i.e., all elements in the matrix are 1).
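As an illustration, the window-limited Gaussian similarity and the dot-division update above can be sketched in numpy as follows. The dense loop over all node pairs is for clarity only, `iterative_filter_step` and its arguments are illustrative names, and the kernel form exp(−‖·‖²/(2γ²)) is an assumption about the unspecified Gaussian kernel:

```python
import numpy as np

def iterative_filter_step(X, coords, eta, gamma):
    """One update of the learnable iterative filter (a sketch).

    X      : (n, h) node matrix, one row per pixel spectrum.
    coords : (n, 2) integer (row, col) position of each node.
    eta    : odd neighborhood window width.
    gamma  : Gaussian-kernel bandwidth hyper-parameter.

    Builds the similarity matrix S (Gaussian kernel inside the
    eta x eta window, 0 outside; the diagonal is 1 automatically)
    and returns (S X) ./ (S 1), the dot-division update.
    """
    n, h = X.shape
    theta = (eta - 1) // 2
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if (abs(coords[i, 0] - coords[j, 0]) <= theta
                    and abs(coords[i, 1] - coords[j, 1]) <= theta):
                d2 = np.sum((X[i] - X[j]) ** 2)
                S[i, j] = np.exp(-d2 / (2.0 * gamma ** 2))
    ones = np.ones((n, h))
    return (S @ X) / (S @ ones)
```

The update is a kernel-weighted average: each node's spectrum moves toward the spectra of its most similar in-window neighbors, which is what drives neighboring pixels of the same material toward consistency.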
Further, the step S3 includes:
taking the nodes with local consistency as the vertexes of the undirected graph to complete the construction of the undirected graph. Preferably, the center node of each window aggregates spatial information from adjacent pixels in a filtering manner, and a finite difference of the similarity values can be used to decide when to stop iterating and thus avoid over-smoothing. A first-order difference, however, may face a problem: if all non-zero elements in s_i are very close to 1, the change between iterations may be small; to avoid this, the invention uses a second-order difference for this task. Upon iteration, the change of the non-zero elements in the sparse matrix S becomes dynamically stable; in terms of similarity, nodes of the same class have higher consistency, and thus the nodes with local consistency can constitute the vertexes of the undirected graph.
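The stopping rule can be made concrete with a small helper; the tolerance value and the max-norm used here are illustrative assumptions, since the text only specifies that a second-order difference of the similarity values is monitored:

```python
import numpy as np

def has_stabilized(S_hist, tol=1e-3):
    """Second-order-difference stopping test for the iterative filter.

    S_hist holds the similarity matrices of the last three iterations;
    the iteration is treated as dynamically stable when the change of
    the change (second-order difference) of the entries falls below
    `tol` in max-norm.
    """
    S0, S1, S2 = S_hist[-3], S_hist[-2], S_hist[-1]
    second_diff = (S2 - S1) - (S1 - S0)
    return float(np.abs(second_diff).max()) < tol
```

A steadily drifting S (constant first-order change) passes this test even when the drift itself is large, which is exactly the robustness the second-order criterion provides over a first-order one.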
In the invention, specifically, the objective function of the manifold geometry aggregation mechanism in step S4 is:
max_(B^T B = I) tr(B^T E B)
wherein tr(B^T E B) is the objective, E = D^(−1/2) W D^(−1/2) represents the regularized similarity matrix, D is the degree matrix of W, and the matrix B consists of the eigenvectors corresponding to the first c largest eigenvalues of the matrix E;
the objective function is obtained as follows:
the initial objective function of the manifold geometry aggregation mechanism is:
min_(B^T B = I) Σ_(i,j) w_(i,j) ‖ b_i/√d_i − b_j/√d_j ‖²
the optimal solution of B can theoretically be obtained through the initial objective function;
letting E = D^(−1/2) W D^(−1/2) be the regularized similarity matrix, where D is the degree matrix of W with d_i = Σ_j w_(i,j), the initial objective function is converted into:
max_(B^T B = I) tr(B^T E B)
a linear kernel function is adopted to calculate the connection weight between each pair of nodes; this process can be represented as:
W = X̃ X̃^T
wherein W is the connection matrix used to represent the weights, and X̃ ∈ R^(n×h) is the node matrix obtained by the iterative filter; letting X̂ = D^(−1/2) X̃, so that E = X̂ X̂^T, the eigendecomposition of E is equivalent to performing singular value decomposition on X̂:
X̂ = U Σ V^T
wherein Σ ∈ R^(n×h) is the matrix of singular values, whose diagonal values are non-negative and real, and U ∈ R^(n×n) and V ∈ R^(h×h) are two orthogonal matrices, U U^T = I_(n×n), V V^T = I_(h×h);
then E can be expressed as E = (U Σ V^T)(U Σ V^T)^T = U Ω U^T, where Ω = Σ Σ^T is a diagonal matrix whose first c largest diagonal entries are the c largest eigenvalues of E;
at this point the undirected graph is cut, and the column vectors of U are the eigenvectors of E;
then the first c largest eigenvalues in Ω are extracted, the corresponding eigenvectors in U are selected using the indices of these eigenvalues, and these eigenvectors are grouped into the matrix B (i.e., B = [u_1, u_2, ..., u_c]); at this time, the matrix B is the data after the preliminary dimension reduction.
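The equivalence between eigendecomposing E and taking the SVD of X̂ can be sketched as follows. This is a minimal numpy sketch assuming non-negative spectra (so that all degrees of W are positive); `manifold_embedding` is an illustrative name:

```python
import numpy as np

def manifold_embedding(X_tilde, c):
    """Cut the undirected graph via SVD (a sketch of step S4).

    The eigendecomposition of E = D^{-1/2} W D^{-1/2}, with the
    linear-kernel connection matrix W = X_tilde @ X_tilde.T, is
    obtained from the SVD of the much smaller X_hat = D^{-1/2} X_tilde
    (n x h instead of n x n); B collects the left singular vectors of
    the c largest singular values, whose squares are the c largest
    eigenvalues of E.
    """
    # Degree vector of W, computed without forming the n x n matrix W:
    # d_i = sum_j x_i . x_j = x_i . (sum_j x_j)
    d = X_tilde @ X_tilde.sum(axis=0)
    X_hat = X_tilde / np.sqrt(d)[:, None]
    U, s, _ = np.linalg.svd(X_hat, full_matrices=False)
    order = np.argsort(s)[::-1][:c]   # indices of the c largest singular values
    return U[:, order]                # B = [u_1, ..., u_c]
```

Because W is never materialized, the cost is dominated by an SVD of an n × h matrix, which is the source of the claimed speed when h ≪ n.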
Furthermore, the mechanism can be extended to further improve the operation efficiency:
in step S4, before calculating the connection weights between the nodes with the linear kernel function, k nodes are randomly selected to construct anchor points, with the value of k set such that k ≪ n;
a linear kernel function is then adopted to obtain the link matrix F ∈ R^(n×k) from the nodes to the anchor points; F can be expressed as:
F = X̃ A^T, i.e. f_(p,q) = x̃_p a_q^T
wherein A = [a_1, ..., a_k]^T ∈ R^(k×h) represents the anchor matrix, each row vector a_q representing an anchor node, with p = 1, ..., n and q = 1, ..., k;
by replacing X̃ with F in the expression of the connection matrix, W = F F^T;
F̂ = D^(−1/2) F is then used to replace X̂, where F̂ is the transformation matrix of F;
performing on F̂ the same singular value decomposition as performed on X̂ finally yields the matrix B.
Through the above steps, the requirements of big-data applications are further met: the operation efficiency of the equipment is further improved, the operation time is shortened, and the consumption of computer memory during dimension reduction is further reduced. Because the improved efficiency and reduced memory consumption come at the cost of a certain information loss, this variant is recommended only for processing large-scale hyperspectral data.
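Under the same assumptions as before, the anchor-accelerated variant replaces X̃ by the n × k link matrix F; the random sampling strategy and all names here are illustrative:

```python
import numpy as np

def anchor_embedding(X_tilde, k, c, seed=0):
    """Anchor-accelerated graph cut (a sketch of the fast variant).

    k nodes (k << n) are sampled as anchors; the node-to-anchor link
    matrix F = X_tilde @ A.T replaces X_tilde, so W = F F^T is never
    formed and the SVD runs on an n x k matrix instead of n x h.
    """
    rng = np.random.default_rng(seed)
    n = X_tilde.shape[0]
    idx = rng.choice(n, size=k, replace=False)
    A = X_tilde[idx]                      # (k, h) anchor matrix
    F = X_tilde @ A.T                     # (n, k) link matrix, linear kernel
    d = F @ F.sum(axis=0)                 # degrees of W = F F^T
    F_hat = F / np.sqrt(d)[:, None]
    U, s, _ = np.linalg.svd(F_hat, full_matrices=False)
    return U[:, np.argsort(s)[::-1][:c]]  # B from the c largest values
```

The trade-off is exactly the one stated in the text: the k-anchor approximation of W loses some information, so this path suits large-scale cubes where the full SVD is impractical.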
Further, in practical engineering applications, singular value decomposition may produce some abnormal values; in order to enhance the robustness of the mechanism, in step S4 all elements in the matrix B are normalized to [0,1] by using the following formula:
b_m ← (b_m − b_min) / (b_max − b_min)
wherein b_i is a row vector of B, B = [b_1, ..., b_n]^T, b_m is the m-th element of b_i, and b_min and b_max are respectively the minimum and maximum values of b_i;
at this time, b_i is the normalized aggregation feature, and the matrix B is the dimension-reduced data containing both the local and the global correlation.
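The row-wise min-max normalization amounts to a few lines of numpy (a sketch; `normalize_rows` is an illustrative name):

```python
import numpy as np

def normalize_rows(B):
    """Normalize each row vector b_i of B to [0, 1].

    Implements b_m <- (b_m - b_min) / (b_max - b_min), where b_min and
    b_max are the minimum and maximum of the row that b_m belongs to.
    """
    b_min = B.min(axis=1, keepdims=True)
    b_max = B.max(axis=1, keepdims=True)
    return (B - b_min) / (b_max - b_min)
```

Per-row scaling bounds the influence of any outlying singular-vector entry on that pixel's reduced feature, which is the robustness motivation stated above.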
Compared with the prior art, the invention has the following beneficial effects:
1. A rapid dimension reduction method for a hyperspectral image comprises the following steps: step S1: converting the hyperspectral image into grid-structured data according to pixel adjacency; step S2: acquiring the local correlation characteristics of the hyperspectral image through a learnable iterative filter; step S3: establishing an undirected graph based on the grid-structured data and the local correlation characteristics of the hyperspectral image; step S4: converging the similar vertexes in the undirected graph by using a manifold geometric aggregation mechanism to obtain a low-dimensional hyperspectral image containing global correlation characteristics. Although this dimension reduction method reduces the data dimension, it preserves the ground-object correlation characteristics from local to global; it thus reduces the data dimensionality and the storage burden, can improve the accuracy of ground-object classification, and meets the market demand of applications that use hyperspectral data but are limited by its storage burden.
2. The rapid dimension reduction method adopts a fully connected manner to converge the correlations of all nodes, so that the original spatial and spectral correlation information of the pixels is still preserved after dimension reduction. The training of the learnable iterative filter does not involve a deep network, and global correlation convergence is realized by the eigendecomposition of the undirected graph, so the operation speed is high and the application range is wide. The invention reduces the dimension of the hyperspectral image without requiring prior knowledge in advance, while the correlations among pixels are still well preserved; after the dimension reduction of the hyperspectral data, not only can the computational burden of subsequent classification tasks be reduced, but the accuracy of ground-object classification can also be further improved.
Drawings
FIG. 1 is a block flow diagram of a fast dimension reduction method for hyperspectral images;
FIG. 2 is a schematic diagram of the technical architecture and implementation of a learnable iterative filter;
FIG. 3 is a schematic diagram of a procedure for implementing dimension reduction by the manifold geometry aggregation mechanism.
Detailed Description
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The features and properties of the present invention are described in further detail below with reference to examples.
Example one
Spectral dimension reduction is a data compression technique that can mine the bidirectional correlation between ground-object spatial information and spectral characteristics; it is mainly applied to reducing the redundant information of hyperspectral data, reducing the storage burden of the data, preventing the curse of dimensionality in data processing, lowering the memory requirement for using hyperspectral data, and improving the accuracy of ground-object classification.
At present, common dimension reduction methods comprise supervised and unsupervised dimension reduction. Supervised dimension reduction requires prior knowledge to be provided in advance, which imposes certain limitations in actual use; unsupervised dimension reduction, although highly automated and convenient to use, causes a large loss of information and can hardly meet high-precision classification requirements.
In order to solve the above problems, this embodiment provides a rapid dimension reduction method for a hyperspectral image, in which the hyperspectral image is regarded as an undirected fully connected graph, and correlation convergence is realized by cutting the undirected graph; meanwhile, a new ground-object spatial-spectral correlation convergence mechanism is produced based on the rapid dimension reduction method.
In particular, correlation characteristics from local to global are aggregated from the two directions of Euclidean distance and manifold geometry theory.
Firstly, a novel learnable iterative filter is designed that can adaptively acquire the local correlation characteristics between pixels from Euclidean space; that is, the vertexes of the undirected graph are optimized through the novel learnable iterative filter. For this filter, each node corresponds to one pixel and, as a center, aggregates the local correlation characteristics of adjacent pixels; nodes with the same attributes form one vertex, and all nodes and vertexes are updated through kernel-based learning iterations. With the updating of the nodes, the consistency between adjacent pixels increases, and the similarity relation between them gradually stabilizes; meanwhile, by utilizing a kernel-based learning method, the novel filter can adaptively calculate the aggregation weight between the central pixel and its adjacent pixels.
Then, the global correlation characteristics between the pixels are acquired by using the manifold geometric aggregation mechanism provided by this embodiment; after this processing, although the data dimension is reduced, the ground-object correlation characteristics are maintained from local to global, so that the data dimensionality and the storage burden are reduced, the accuracy of ground-object classification can be improved, and the market demand of applications that use hyperspectral data but are limited by its storage burden is met.
Referring to fig. 1-3, a method for fast dimension reduction of a hyperspectral image specifically includes:
step S1: converting the hyperspectral image into grid-structured data according to pixel adjacency;
step S2: acquiring the local correlation characteristics of the hyperspectral image through a learnable iterative filter; that is, in an undirected graph, a vertex aggregates the local properties of neighboring pixels, and these vertices can provide the initial connections between pixels; in order to establish the vertices, this embodiment develops a learnable iterative filter that aggregates the adjacent spatial characteristics of each node through iterative learning;
step S3: establishing an undirected graph based on the grid-structured data and the local correlation characteristics of the hyperspectral image;
step S4: converging the similar vertexes in the undirected graph by using a manifold geometric aggregation mechanism to obtain the low-dimensional hyperspectral image containing the global correlation characteristics.
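To tie steps S1 through S4 together, the following toy end-to-end sketch chains the operations described above on a tiny cube. Every function name, hyper-parameter value, and the dense similarity matrix are illustrative only; a practical implementation would use sparse structures and, for large cubes, the anchor variant:

```python
import numpy as np

def fast_dr(cube, eta=3, gamma=0.5, iters=2, c=3):
    """End-to-end sketch of steps S1-S4 on a small (rows, cols, h) cube.

    S1: scale to [0, 1] and flatten into an (n, h) node matrix with
        grid coordinates.
    S2: iterate the window-limited Gaussian filter update.
    S3/S4: cut the graph via SVD of D^{-1/2} X and keep c columns,
        then min-max normalize each row of B.
    """
    rows, cols, h = cube.shape
    cube = cube.astype(np.float64)
    cube = (cube - cube.min()) / (cube.max() - cube.min())        # S1
    X = cube.reshape(-1, h)
    coords = np.array([(r, q) for r in range(rows) for q in range(cols)])
    n = X.shape[0]
    theta = (eta - 1) // 2
    for _ in range(iters):                                        # S2
        S = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if (abs(coords[i, 0] - coords[j, 0]) <= theta and
                        abs(coords[i, 1] - coords[j, 1]) <= theta):
                    d2 = np.sum((X[i] - X[j]) ** 2)
                    S[i, j] = np.exp(-d2 / (2.0 * gamma ** 2))
        X = (S @ X) / (S @ np.ones((n, h)))
    d = X @ X.sum(axis=0)                 # degrees of W = X X^T    # S3/S4
    U, s, _ = np.linalg.svd(X / np.sqrt(d)[:, None], full_matrices=False)
    B = U[:, np.argsort(s)[::-1][:min(c, h)]]
    b_min = B.min(axis=1, keepdims=True)
    b_max = B.max(axis=1, keepdims=True)
    return (B - b_min) / (b_max - b_min)
```

The output has one row of c reduced features per pixel, each scaled to [0, 1], matching the final form of the matrix B described in the embodiment.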
In this embodiment, specifically, the step S1 includes:
converting the hyperspectral image into grid-structured data according to the adjacency relation between pixels in the hyperspectral image;
wherein the nodes in the grid-structured data correspond to the pixels in the hyperspectral image; i.e. the hyperspectral image can be seen as grid-structured data in which one node corresponds to one pixel.
In this embodiment, specifically, in step S1, all elements in the pixels are normalized to the range [0,1] before running the learnable iterative filter, to ensure reasonable spectral reflectance values.
In this embodiment, specifically, the step S2 includes:
setting a vector x_i ∈ R^h to define a node, where h is the number of spectral bands;
setting an initial node matrix X = [x_1, x_2, ..., x_n]^T ∈ R^(n×h), where n is the number of pixels;
letting (α_i, β_i) represent the spatial coordinates of node x_i, and η be the width of the neighborhood window; it should be noted that η is a hyper-parameter set to an odd number;
the node x_i is regarded as the center of the neighborhood window, and its neighborhood N(x_i) is defined as:
N(x_i) = { x_(α,β) | α ∈ [α_i − θ, α_i + θ], β ∈ [β_i − θ, β_i + θ] }
wherein α and β define the coordinate range of the neighborhood nodes, and θ = (η − 1)/2;
it should be noted that, for the filter, an overly large window collects many heterogeneous pixels, while an overly small window significantly increases the computational burden; relative to the central node x_i, the similarity of the nodes within the window is of primary concern, because neighboring pixels are more likely to belong to the same material than more distant ones; therefore, a Gaussian kernel function is adopted to calculate the similarity between each node and the other nodes in its neighborhood window, while the similarity value of the nodes outside the window is defined as 0; because the window contains η² pixels, η² similarity values are obtained, and the n − η² nodes outside the window have a similarity value of 0; the number of bands is usually much smaller than the number of pixels, i.e. h ≪ n;
nodes with local consistency are then determined according to the similarity among nodes, thereby determining the local correlation characteristics between pixels in the hyperspectral image.
In this embodiment, specifically, calculating the similarity between each node and the other nodes in its neighborhood window by using the Gaussian kernel function includes:
the i-th node is expanded into a sparse vector s_i ∈ R^n by using the indices of the pixels, written as:
s_(i,j) = K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2γ²)) if x_j ∈ N(x_i), and s_(i,j) = 0 otherwise;
wherein K(·,·) is the Gaussian kernel function and γ is a hyper-parameter determining the width of the Gaussian kernel;
the sparse vector s_i represents the similarity between node x_i and all nodes; these vectors form a sparse matrix S = [s_1, ..., s_n]^T ∈ R^(n×n), wherein the matrix S is symmetric and its diagonal elements are 1;
the node matrix X is then updated by means of the sparse matrix S.
In this embodiment, specifically, updating the node matrix X by means of the sparse matrix S includes:
X ← (S X) ./ (S 1^(n×h))
wherein ./ represents the division of the corresponding elements of two matrices (i.e., "dot division"), and 1^(n×h) represents an all-ones matrix (i.e., all elements in the matrix are 1).
In this embodiment, specifically, the step S3 includes:
taking the nodes with local consistency as the vertexes of the undirected graph to complete the construction of the undirected graph. Preferably, the center node of each window aggregates spatial information from adjacent pixels in a filtering manner, and a finite difference of the similarity values can be used to decide when to stop iterating and thus avoid over-smoothing. A first-order difference, however, may face a problem: if all non-zero elements in s_i are very close to 1, the change between iterations may be small; to avoid this, this embodiment uses a second-order difference for this task. Upon iteration, the change of the non-zero elements in the sparse matrix S becomes dynamically stable; in terms of similarity, nodes of the same class have higher consistency, and thus the nodes with local consistency can constitute the vertexes of the undirected graph.
In this embodiment, specifically, the objective function of the manifold geometry aggregation mechanism in step S4 is:
max_(B^T B = I) tr(B^T E B)
wherein tr(B^T E B) is the objective, E = D^(−1/2) W D^(−1/2) represents the regularized similarity matrix, D is the degree matrix of W, and the matrix B consists of the eigenvectors corresponding to the first c largest eigenvalues of the matrix E;
the objective function is obtained as follows:
the initial objective function of the manifold geometry aggregation mechanism is:
min_(B^T B = I) Σ_(i,j) w_(i,j) ‖ b_i/√d_i − b_j/√d_j ‖²
the optimal solution of B can theoretically be obtained through the initial objective function;
letting E = D^(−1/2) W D^(−1/2) be the regularized similarity matrix, where D is the degree matrix of W with d_i = Σ_j w_(i,j), the initial objective function is converted into:
max_(B^T B = I) tr(B^T E B)
a linear kernel function is adopted to calculate the connection weight between each pair of nodes; this process can be represented as:
W = X̃ X̃^T
wherein W is the connection matrix used to represent the weights, and X̃ ∈ R^(n×h) is the node matrix obtained by the iterative filter; letting X̂ = D^(−1/2) X̃, so that E = X̂ X̂^T, the eigendecomposition of E is equivalent to performing singular value decomposition on X̂:
X̂ = U Σ V^T
wherein Σ ∈ R^(n×h) is the matrix of singular values, whose diagonal values are non-negative and real, and U ∈ R^(n×n) and V ∈ R^(h×h) are two orthogonal matrices, U U^T = I_(n×n), V V^T = I_(h×h);
then E can be expressed as E = (U Σ V^T)(U Σ V^T)^T = U Ω U^T, where Ω = Σ Σ^T is a diagonal matrix whose first c largest diagonal entries are the c largest eigenvalues of E;
at this point the undirected graph completes cutting, and the column vectors of U are the eigenvectors of E;
then the first c largest eigenvalues in Ω are extracted, the corresponding eigenvectors in U are selected using the indices of these eigenvalues, and these eigenvectors are grouped into the matrix B (i.e., B = [u_1, u_2, ..., u_c]); at this time, the matrix B is the data after the preliminary dimension reduction; preferably, the reduced-dimension data can provide training samples for a classifier.
In this embodiment, specifically, the above mechanism can be extended to further improve the operation efficiency:
in step S4, before calculating the connection weights between the nodes with the linear kernel function, k nodes are randomly selected to construct anchor points, with the value of k set such that k ≪ n;
a linear kernel function is then adopted to obtain the link matrix F ∈ R^(n×k) from the nodes to the anchor points; F can be expressed as:
F = X̃ A^T, i.e. f_(p,q) = x̃_p a_q^T
wherein A = [a_1, ..., a_k]^T ∈ R^(k×h) represents the anchor matrix, each row vector a_q representing an anchor node, with p = 1, ..., n and q = 1, ..., k;
by replacing X̃ with F in the expression of the connection matrix, W = F F^T;
F̂ = D^(−1/2) F is then used to replace X̂, where F̂ is the transformation matrix of F;
performing on F̂ the same singular value decomposition as performed on X̂ finally yields the matrix B.
Through the above steps, the requirements of big-data applications are further met: the operation efficiency of the equipment is further improved, the operation time is shortened, and the consumption of computer memory during dimension reduction is further reduced. Because the improved efficiency and reduced memory consumption come at the cost of a certain information loss, this variant is recommended only for processing large-scale hyperspectral data.
In this embodiment, specifically, in practical engineering applications, the singular value decomposition may produce some abnormal values; in order to enhance the robustness of the mechanism, in step S4, all elements in the matrix B are normalized to [0, 1] by adopting the following formula:

b_m ← (b_m − b_min) / (b_max − b_min)

wherein: b_i is a row vector of B, B = [b_1, ..., b_n]^T, b_m is the m-th element of b_i, and b_min and b_max are respectively the minimum and maximum values of b_i;
at this time, b_i is the aggregation feature after normalization, and the matrix B is the dimensionality-reduced data containing both the local correlation and the global correlation.
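The row-wise min-max normalization can be sketched as follows; the guard against constant rows (b_max = b_min) is an added assumption:

```python
import numpy as np

def normalize_rows(B):
    """Scale every element of each row vector b_i of B into [0, 1]."""
    b_min = B.min(axis=1, keepdims=True)
    b_max = B.max(axis=1, keepdims=True)
    span = np.where(b_max > b_min, b_max - b_min, 1.0)  # avoid divide-by-zero
    return (B - b_min) / span

B = np.array([[2.0, 4.0, 6.0],
              [-1.0, 0.0, 1.0]])
print(normalize_rows(B))
# [[0.  0.5 1. ]
#  [0.  0.5 1. ]]
```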
The above-mentioned embodiments only express specific implementations of the present application; their description is relatively specific and detailed, but it should not be construed as limiting the scope of the present application. It should be noted that, for those skilled in the art, several changes and modifications can be made without departing from the technical idea of the present application, all of which fall within the protection scope of the present application.
The background section is provided to present the context of the invention in general. Work of the presently named inventors, to the extent it is described in the background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, is neither expressly nor impliedly admitted as prior art against the present invention.

Claims (10)

1. A rapid dimension reduction method for a hyperspectral image is characterized by comprising the following steps:
step S1: converting the hyperspectral images into mesh structure data according to the adjacency;
step S2: acquiring local correlation characteristics of the hyperspectral image through a learnable iterative filter;
and step S3: establishing an undirected graph based on the local correlation characteristics of the mesh structure data and the hyperspectral image;
and step S4: converging the similar vertexes in the undirected graph by using a manifold geometric aggregation mechanism to obtain the low-dimensional hyperspectral image containing the global correlation characteristics.
2. The method for fast dimensionality reduction of the hyperspectral image according to claim 1, wherein the step S1 comprises:
converting the hyperspectral image into the mesh structure data according to the adjacency relation between pixels in the hyperspectral image;
wherein the nodes in the mesh structure data correspond to the pixels in the hyperspectral image.
3. The method for fast dimensionality reduction of the hyperspectral image according to claim 2, wherein in the step S1, all elements of each pixel are normalized to the range [0, 1] to ensure a reasonable spectral reflectance.
4. The method for fast dimensionality reduction of the hyperspectral image according to claim 3, wherein the step S2 comprises:
setting a vector x_i ∈ R^h to define a node;
setting an initial node matrix X ∈ R^(n×h), X = [x_1, x_2, ..., x_n]^T;
letting (α_i, β_i) represent the spatial coordinates of the node x_i, and η be the width of the neighborhood window;
regarding the node x_i as the center of the neighborhood window, its neighborhood N(x_i) is defined as:

N(x_i) = { x_(α,β) | α ∈ [α_i − θ, α_i + θ], β ∈ [β_i − θ, β_i + θ] }

wherein: α and β define the coordinate range of the neighborhood nodes, and θ = (η − 1)/2;
calculating the similarity between each node and the other nodes in its neighborhood window by adopting a Gaussian kernel function, and meanwhile defining the similarity value of the nodes outside the window as 0;
and determining nodes with local consistency according to the similarity among the nodes, thereby determining local correlation characteristics between pixels in the hyperspectral image.
5. The method for fast dimensionality reduction of the hyperspectral image according to claim 4, wherein the calculating of the similarity between each node and the other nodes in the neighborhood window by adopting the Gaussian kernel function comprises:
expanding the i-th node into a sparse vector s_i ∈ R^n using the indices of the pixels, written as:

s_(i,j) = κ(x_i, x_j) = exp(−‖x_i − x_j‖² / (2γ²)) if x_j ∈ N(x_i), and s_(i,j) = 0 otherwise

wherein: κ(·, ·) is the Gaussian kernel function and γ is a hyperparameter determining the Gaussian kernel;
the sparse vector s_i represents the similarity between the node x_i and all nodes, and these vectors form a sparse matrix S ∈ R^(n×n), wherein: the matrix S is a symmetric matrix and its diagonal elements are 1;
the node matrix X is updated by means of the sparse matrix S.
6. The method for fast dimensionality reduction of the hyperspectral image according to claim 5, wherein the node matrix X is updated by means of the sparse matrix S as:

X̂ = (S X) ⊘ (S 1_(n×h))

wherein: ⊘ denotes the division of the corresponding elements of the two matrices, and 1_(n×h) denotes an all-ones matrix.
7. The method for fast dimensionality reduction of the hyperspectral image according to claim 6, wherein the step S3 comprises:
and taking the nodes with local consistency as the vertexes of the undirected graph to finish the construction of the undirected graph.
8. The method according to claim 6, wherein the objective function of the manifold geometry aggregation mechanism in the step S4 is:

max_B tr(B^T E B), s.t. B^T B = I

wherein: E = D^(−1/2) W D^(−1/2) represents the regularized similarity matrix, D is the degree matrix of W, and the matrix B consists of the eigenvectors corresponding to the first c maximum eigenvalues of the matrix E;
the linear kernel function is adopted to calculate the connection weights between the nodes, and this process can be represented as:

W = X̂ X̂^T

wherein: W is the connection matrix used to represent the weights, and X̂^T is the transpose of X̂;
in this case, the eigendecomposition of E is equivalent to a singular value decomposition of D^(−1/2) X̂:

D^(−1/2) X̂ = U Σ V^T

wherein: Σ ∈ R^(n×h) is the matrix of singular values, whose diagonal elements are non-negative and real, and U ∈ R^(n×n) and V ∈ R^(h×h) are two orthogonal matrices, U U^T = I_(n×n), V V^T = I_(h×h);
then E may be expressed as E = (U Σ V^T)(U Σ V^T)^T = U Ω U^T, where Ω = Σ Σ^T is a diagonal matrix and ω_1, ..., ω_c are the c maximum eigenvalues in Ω;
at this time, the undirected graph is cut, and the column vectors of U are the eigenvectors of E;
then, the first c maximum eigenvalues in Ω are extracted, the corresponding eigenvectors in U are extracted using the indices of these eigenvalues, and these eigenvectors form the matrix B, which is the data after preliminary dimension reduction.
9. The method according to claim 7, wherein in the step S4, before the linear kernel function is used to calculate the connection weights between the nodes, k nodes are randomly selected to construct anchor points, the value of k being set such that k < n;
a linear kernel function is then adopted to obtain a link matrix F ∈ R^(n×k) from the nodes to the anchor points, which can be expressed as:

F = X̂ Â^T, i.e., F_(p,q) = x̂_p^T â_q

wherein: Â ∈ R^(k×h) represents the matrix of anchor points, each row vector representing an anchor node, i.e., â_q ∈ R^h represents an anchor point, p = 1, ..., n, q = 1, ..., k;
by replacing X̂ with F, the expression of W becomes W = F F^T;
by using F to replace X̂ in the singular value decomposition, D^(−1/2) F = U Σ_F V_F^T, wherein V_F is the transformation matrix for F;
a singular value decomposition is performed on D^(−1/2) F in the same manner as on D^(−1/2) X̂, finally obtaining the matrix B.
10. The method for fast dimensionality reduction of the hyperspectral image according to claim 1, wherein in the step S4, all elements in the matrix B are normalized to [0, 1] by adopting the following formula:

b_m ← (b_m − b_min) / (b_max − b_min)

wherein: b_i is a row vector of B, B = [b_1, ..., b_n]^T, b_m is the m-th element of b_i, and b_min and b_max are respectively the minimum and maximum values of b_i.
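The neighborhood filtering described in claims 4 to 6 can be sketched as follows; the border clipping, the single filtering pass, and the parameter defaults are illustrative assumptions:

```python
import numpy as np

def neighborhood_filter(X, rows, cols, eta=3, gamma=1.0):
    """One pass of the Gaussian-kernel neighborhood filter (claims 4-6).

    X:    (n, h) node matrix, one row per pixel, n = rows * cols.
    eta:  odd neighborhood-window width; theta = (eta - 1) / 2.
    Returns X_hat = (S X) / (S 1), computed row by row so the sparse
    similarity matrix S is never stored.
    """
    n, h = X.shape
    theta = (eta - 1) // 2
    X_img = X.reshape(rows, cols, h)
    X_hat = np.empty_like(X_img)
    for a in range(rows):
        for b in range(cols):
            # neighborhood window, clipped at the image border (assumption)
            win = X_img[max(a - theta, 0):a + theta + 1,
                        max(b - theta, 0):b + theta + 1].reshape(-1, h)
            d2 = ((win - X_img[a, b]) ** 2).sum(axis=1)
            s = np.exp(-d2 / (2 * gamma ** 2))   # Gaussian kernel weights
            X_hat[a, b] = (s[:, None] * win).sum(axis=0) / s.sum()
    return X_hat.reshape(n, h)

rng = np.random.default_rng(2)
X = rng.random((5 * 5, 8))            # toy 5x5 image with 8 spectral bands
X_hat = neighborhood_filter(X, 5, 5)
print(X_hat.shape)                    # (25, 8)
```

Each output pixel is a convex combination of its window neighbors, so filtered values stay within the range of the input data.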
CN202211432621.8A 2022-11-16 2022-11-16 Rapid dimension reduction method for hyperspectral image Active CN115861683B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211432621.8A CN115861683B (en) 2022-11-16 2022-11-16 Rapid dimension reduction method for hyperspectral image


Publications (2)

Publication Number Publication Date
CN115861683A true CN115861683A (en) 2023-03-28
CN115861683B CN115861683B (en) 2024-01-16

Family

ID=85663659



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080181503A1 (en) * 2007-01-30 2008-07-31 Alon Schclar Diffusion bases methods for segmentation and clustering
CN106778885A (en) * 2016-12-26 2017-05-31 Chongqing University Hyperspectral image classification method based on local manifold embedding
CN108520281A (en) * 2018-04-13 2018-09-11 Shanghai Ocean University Semi-supervised dimensionality reduction method for hyperspectral images based on global and local preservation
CN110298414A (en) * 2019-07-09 2019-10-01 Xidian University Hyperspectral image classification method based on denoising combined with dimensionality reduction and guided filtering
CN111860612A (en) * 2020-06-29 2020-10-30 Southwest China Institute of Electronic Technology (the 10th Research Institute of China Electronics Technology Group Corporation) Unsupervised hyperspectral image latent low-rank projection learning feature extraction method
WO2022001159A1 (en) * 2020-06-29 2022-01-06 Southwest China Institute of Electronic Technology (the 10th Research Institute of China Electronics Technology Group Corporation) Latent low-rank projection learning based unsupervised feature extraction method for hyperspectral images
CN112529865A (en) * 2020-12-08 2021-03-19 Xi'an University of Science and Technology Mixed-pixel bilinear deep unmixing method, system, application and storage medium
WO2022178977A1 (en) * 2021-02-26 2022-09-01 Northwestern Polytechnical University Unsupervised data dimensionality reduction method based on adaptive nearest neighbor graph embedding
CN113920345A (en) * 2021-09-09 2022-01-11 China University of Geosciences (Wuhan) Hyperspectral image dimensionality reduction method based on clustering multi-manifold metric learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YUANCHAO SU, ET AL.: "Graph-Cut-Based Node Embedding for Dimensionality Reduction and Classification of Hyperspectral Remote Sensing Images", IGARSS 2022 - 2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, pages 1720 - 1723 *
PU Hanye; WANG Bin; ZHANG Liming: "A new dimensionality reduction algorithm for hyperspectral images based on manifold learning", Infrared and Laser Engineering, no. 01, pages 238 - 243 *
SU Yuanchao, et al.: "Spatially weighted isolation forest for anomaly target detection in hyperspectral imagery", Science of Surveying and Mapping, pages 92 - 98 *
WEI Feng; HE Mingyi; MEI Shaohui: "Feature extraction of hyperspectral data based on spatially consistent neighborhood preserving embedding", Infrared and Laser Engineering, no. 05, pages 143 - 148 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant