CN113989528B - Hyperspectral image characteristic representation method based on depth joint sparse-collaborative representation - Google Patents
Hyperspectral image characteristic representation method based on depth joint sparse-collaborative representation
- Publication number: CN113989528B (application CN202111492113.4A)
- Authority: CN (China)
- Prior art keywords: representation, collaborative, sparse, hyperspectral image, layer
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
The invention discloses a hyperspectral image characteristic representation method based on depth joint sparse-collaborative representation. First, the method considers that each pixel of a hyperspectral image contains nonlinear information, that redundancy exists among the spectral bands, and that strong correlation exists among similar pixel samples. Second, it proposes a depth joint sparse-collaborative representation method that simultaneously represents the significant information within each pixel sample and the correlation information between samples, and extracts deep nonlinear features of the hyperspectral image. Finally, it designs an alternating iterative algorithm to solve the depth joint sparse-collaborative representation model and obtain the hyperspectral image feature representation. By adopting a deep network for nonlinear mapping and combining sparse representation with collaborative representation, the invention fully accounts for the redundancy and nonlinearity of the pixel samples and the correlation among samples, thereby extracting more discriminative features.
Description
Technical Field
The invention relates to the technical field of hyperspectral image processing, in particular to a hyperspectral image characteristic representation method based on depth joint sparse-collaborative representation.
Background
Hyperspectral image feature representation is one of the basic problems in the field of hyperspectral image processing and a key technology in remote sensing science and computer science. It mainly uses a computer to process the hyperspectral image and to extract or select features carrying discriminant information, thereby providing a basis for related decisions (such as classification and identification). Because hyperspectral remote sensing images are characterized by large data volume, high redundancy, and complex spatial-spectral structure, feature representation is difficult, which in turn creates many opportunities and ideas for algorithm design.
The relevant literature indicates that an excellent hyperspectral image feature representation method should preserve and describe as much of the information in the image as possible. Hyperspectral images contain abundant information: not only does each pixel vector carry a large amount of information, but the pixel vectors also share rich correlation information, especially among similar pixels. If such information can be described and utilized efficiently and accurately, it should guide the design of feature representation methods. However, most existing methods, such as those based on sparse representation or on deep network learning, usually focus only on the information of each individual pixel vector and neglect the correlation information among pixel vectors, which limits the discriminant power of the output features. Since deep learning has great advantages in describing the nonlinearity and deep features of data, building a deep network capable of describing inter-pixel correlation offers a promising route to more discriminative features. In addition, while the deep features of hyperspectral images are highly nonlinear and robust, they tend to be less observable than shallow features; the output features become more discriminative if the deep and shallow features of the hyperspectral image can be utilized simultaneously.
In summary, many existing hyperspectral image feature representation methods suffer from low information utilization, such as failing to describe the correlation among pixel vectors and losing deep or shallow features, which is detrimental to hyperspectral image feature representation.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a hyperspectral image characteristic representation method based on depth joint sparse-collaborative representation that can fully utilize the information in the hyperspectral image, thereby improving the discriminant power of the output features and providing a reliable basis for related decisions.
In order to solve the technical problems, the invention provides a hyperspectral image characteristic representation method based on depth joint sparse-collaborative representation, which comprises the following steps:
step one, performing matrix expansion on the three-dimensional hyperspectral image to obtain its two-dimensional matrix form H ∈ R^(L×N), where each column represents a pixel sample, the column count N is the spatial resolution (number of pixels) of the hyperspectral image, and the row count L is the spectral resolution (number of bands) of the hyperspectral image;
step two, sampling from the matrix, namely extracting a certain number of pixel vectors from each class to form the training samples X ∈ R^(L×N_r), the remaining pixel vectors forming the test samples Y ∈ R^(L×N_e), with N = N_r + N_e;
Step three, introducing a multi-layer forward neural network and mapping each sample to a nonlinear feature space, in this way describing and mining the nonlinear relations among the samples; then introducing sparse representation and collaborative representation into each layer of the network, establishing a depth joint sparse-collaborative representation model, and learning the sparse and collaborative coefficients of each layer;
step four, designing an alternating iterative update algorithm to solve the depth joint sparse-collaborative representation model proposed in step three;
step five, learning from the training samples X, through the alternating iterative update algorithm of step four, the optimal sparse dictionaries D_s^(m), collaborative dictionaries D_c^(m), and neural network parameters θ^(m), and then predicting the feature representation corresponding to the test samples Y.
Preferably, in step three, a multi-layer forward neural network is introduced and each sample is mapped to a nonlinear feature space, in this way describing and mining the nonlinear relations among the samples; then sparse representation and collaborative representation are introduced into each layer of the network, a depth joint sparse-collaborative representation model (1) is established, and the sparse and collaborative coefficients of each layer are learned, where θ = {W^(m), b^(m), m = 1:M} denotes the parameters (weights and biases) of the forward neural network, M being the number of layers of the network; D_s and D_c denote the sparse dictionary and the collaborative dictionary; C_s and C_c are the sparse coefficients and the collaborative coefficients; α is a weighting coefficient; p_s and p_c are very small positive numbers; for the neural network, the output of the m-th layer is O^(m) = G(W^(m) O^(m−1) + b^(m)), where O^(0) = X, G(·) denotes the activation function, and d_m is the output dimension of the m-th layer; the model can further be expressed layer-wise as model (2), where D_s^(m) and D_c^(m) are the sparse and collaborative dictionaries of the m-th layer, C_s^(m) and C_c^(m) are the sparse and collaborative coefficients of the m-th layer, and θ^(m) = {W^(m), b^(m)} are the parameters of the network at the m-th layer.
Preferably, in step four, designing an alternating iterative update algorithm to solve the depth joint sparse-collaborative representation model proposed in step three specifically includes: transforming the depth joint sparse-collaborative representation model (2) of step three into an unconstrained form, denoted (3).
1) updating θ^(m) = {W^(m), b^(m)}: fixing the dictionaries D_s^(m), D_c^(m) and the coefficients C_s^(m), C_c^(m) yields subproblem (4);
combining O^(m) = G(W^(m) O^(m−1) + b^(m)), the objective function of (4) is differentiated with respect to W^(m) and b^(m), and the chain rule yields the gradient terms, where T^(m) = W^(m) O^(m−1) + b^(m), G′(·) is the derivative of the activation function G(·), and ⊙ denotes the matrix dot-product (elementwise) operator; the parameters θ^(m) = {W^(m), b^(m)} of the neural network can then be updated by the following gradient-descent equation:
wherein Γ is the step size;
2) updating the dictionaries D_s^(m) and D_c^(m): fixing θ^(m), C_s^(m), and C_c^(m), problem (3) reduces to a least-squares subproblem whose solutions D_s^(m) and D_c^(m) can be given directly in closed form;
3) updating the coefficients C_s^(m) and C_c^(m): fixing θ^(m), D_s^(m), and D_c^(m), problem (3) converts to subproblem (13); introducing an auxiliary variable V^(m), problem (13) can be split, and in this way C_s^(m), C_c^(m), and V^(m) can be obtained by alternately iterating over the following sub-problems (14)-(16):
from the least-squares optimization solution method, the solutions of (14) and (15) are obtained in closed form, where I denotes the identity matrix; the solution of (16) can be obtained by the soft-thresholding method, where soft(a, b) = sign(a)·max(|a| − b, 0) and sign(x) is the sign function;
after the iterative optimization of 1) to 3), the optimal D_s^(m), D_c^(m), C_s^(m), C_c^(m), and θ^(m) can be learned.
Preferably, in step five, for the test samples Y, the output of the m-th layer of the network is obtained, and the corresponding sparse coefficients and collaborative coefficients are calculated from the learned dictionaries D_s^(m) and D_c^(m); the calculated sparse coefficients and collaborative coefficients are combined by weighted summation, with weight t satisfying 0 < t < 1, to obtain the per-layer feature representation; finally, the feature representations corresponding to the M layers of the network are stacked into a whole, obtaining the feature representation of the test samples Y.
The beneficial effects of the invention are as follows: a deep network is adopted for nonlinear mapping, a sparse-collaborative representation model is established at each layer of the deep network, the redundancy and nonlinearity of the pixel samples and the correlation among samples are fully considered, and the deep and shallow features output by the network are both utilized, so that more discriminative features are extracted.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Detailed Description
As shown in fig. 1, a hyperspectral image characteristic representation method based on depth joint sparse-collaborative representation comprises the following steps:
step one, performing matrix expansion on the three-dimensional hyperspectral image to obtain its two-dimensional matrix form H ∈ R^(L×N), where each column represents a pixel sample, the column count N is the spatial resolution (number of pixels) of the hyperspectral image, and the row count L is the spectral resolution (number of bands) of the hyperspectral image;
step two, sampling from the matrix, namely extracting a certain number of pixel vectors from each class to form the training samples X ∈ R^(L×N_r), the remaining pixel vectors forming the test samples Y ∈ R^(L×N_e), with N = N_r + N_e;
Step three, introducing a multi-layer forward neural network, mapping each sample to a nonlinear feature space, and describing and mining nonlinear relations among the samples in the mode; then, sparse representation and collaborative representation are introduced into each layer of the network, a depth joint sparse-collaborative representation model is established, and sparse coefficients and collaborative coefficients of each layer are learned;
step four, designing an alternate iterative updating algorithm to solve the depth joint sparse-collaborative representation model proposed in the step three;
step five, learning an optimal sparse dictionary from the training sample X by the model through an alternate iterative updating algorithm in the step fourCollaborative dictionary->Neural network parameter θ (m) And then predicting the characteristic representation corresponding to the test sample Y.
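Steps one and two above can be sketched in NumPy as follows; the cube size (6 × 5 pixels, 10 bands) and the train/test split below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# A minimal sketch of steps one and two; the cube size and split are illustrative.
rng = np.random.default_rng(0)
cube = rng.random((6, 5, 10))          # hypothetical hyperspectral cube (rows, cols, bands)
rows, cols, L = cube.shape

# Step one: unfold the 3-D cube into H in R^(L x N), one pixel spectrum per column.
H = cube.reshape(rows * cols, L).T     # shape (L, N) with N = rows * cols
N = H.shape[1]

# Step two: sample N_r pixel vectors as training samples X; the remaining
# N_e = N - N_r pixel vectors become the test samples Y, so N = N_r + N_e.
N_r = 20
perm = rng.permutation(N)
X, Y = H[:, perm[:N_r]], H[:, perm[N_r:]]
print(H.shape, X.shape, Y.shape)       # (10, 30) (10, 20) (10, 10)
```

A real pipeline would sample per class label rather than uniformly at random, as the text specifies "extracting a certain number of pixel vectors from each class".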
Step three, introducing a multi-layer forward neural network and mapping each sample to a nonlinear feature space, in this way describing and mining the nonlinear relations among the samples; then introducing sparse representation and collaborative representation into each layer of the network, establishing a depth joint sparse-collaborative representation model (1), and learning the sparse and collaborative coefficients of each layer, where θ = {W^(m), b^(m), m = 1:M} denotes the parameters (weights and biases) of the forward neural network, M being the number of layers of the network; D_s and D_c denote the sparse dictionary and the collaborative dictionary; C_s and C_c are the sparse coefficients and the collaborative coefficients; α is a weighting coefficient; p_s and p_c are very small positive numbers; for the neural network, the output of the m-th layer is O^(m) = G(W^(m) O^(m−1) + b^(m)), where O^(0) = X, G(·) denotes the activation function, and d_m is the output dimension of the m-th layer; the model can further be expressed layer-wise as model (2), where D_s^(m) and D_c^(m) are the sparse and collaborative dictionaries of the m-th layer, C_s^(m) and C_c^(m) are the sparse and collaborative coefficients of the m-th layer, and θ^(m) = {W^(m), b^(m)} are the parameters of the network at the m-th layer.
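The per-layer mapping O^(m) = G(W^(m) O^(m−1) + b^(m)) with O^(0) = X can be sketched as below; the tanh activation, the layer widths d_m, and all shapes are illustrative assumptions.

```python
import numpy as np

# Sketch of the forward network producing the per-layer outputs O^(m);
# activation, layer widths, and shapes are illustrative assumptions.
def forward(X, params, G=np.tanh):
    outputs, O = [], X
    for W, b in params:                 # one (weight, bias) pair per layer
        O = G(W @ O + b)                # O^(m) = G(W^(m) O^(m-1) + b^(m))
        outputs.append(O)               # keep every O^(m), m = 1..M
    return outputs

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 30))       # L = 10 bands, 30 training samples
dims = [10, 8, 6]                       # hypothetical output dimensions d_m
params = [(0.1 * rng.standard_normal((dims[m + 1], dims[m])),
           np.zeros((dims[m + 1], 1))) for m in range(len(dims) - 1)]
outs = forward(X, params)
print([O.shape for O in outs])          # [(8, 30), (6, 30)]
```

Keeping every O^(m), rather than only the last layer, matters here because the method later builds a sparse-collaborative representation at each layer.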
In step four, an alternating iterative update algorithm is designed to solve the depth joint sparse-collaborative representation model proposed in step three, specifically: transforming the depth joint sparse-collaborative representation model (2) of step three into an unconstrained form, denoted (3).
1) updating θ^(m) = {W^(m), b^(m)}: fixing the dictionaries D_s^(m), D_c^(m) and the coefficients C_s^(m), C_c^(m) yields subproblem (4); combining O^(m) = G(W^(m) O^(m−1) + b^(m)), the objective function of (4) is differentiated with respect to W^(m) and b^(m), and the chain rule yields the gradient terms, where T^(m) = W^(m) O^(m−1) + b^(m), G′(·) is the derivative of the activation function G(·), and ⊙ denotes the matrix dot-product (elementwise) operator; the neural network parameters θ^(m) = {W^(m), b^(m)} can then be updated by the following gradient-descent equation:
wherein Γ is the step size;
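The gradient step on θ^(m) with step size Γ can be sketched as follows. Since the patent's coupled objective is not reproduced in the source text, a generic squared loss J = ||G(W X + b) − Z||_F² toward a hypothetical fixed target Z stands in for the dictionary-reconstruction terms; it exercises the same chain rule through G′(T^(m)) and the same elementwise product, but is an illustrative assumption, not the patent's exact formula.

```python
import numpy as np

# Illustrative gradient update for theta^(m) = {W, b} with step size Gamma.
# Assumption: J = ||G(W X + b) - Z||_F^2 with a hypothetical fixed target Z.
G = np.tanh
Gp = lambda t: 1.0 - np.tanh(t) ** 2        # derivative G'(.) of the activation

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 40))
Z = 0.5 * rng.standard_normal((4, 40))
W = 0.1 * rng.standard_normal((4, 5))
b = np.zeros((4, 1))
Gamma = 0.005                               # step size

def loss(W, b):
    return float(np.sum((G(W @ X + b) - Z) ** 2))

before = loss(W, b)
for _ in range(100):
    T = W @ X + b                           # pre-activation T^(m)
    delta = 2.0 * (G(T) - Z) * Gp(T)        # dJ/dT by the chain rule (elementwise product)
    W = W - Gamma * (delta @ X.T)           # W <- W - Gamma * dJ/dW
    b = b - Gamma * delta.sum(axis=1, keepdims=True)
print(loss(W, b) < before)
```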
2) updating the dictionaries D_s^(m) and D_c^(m): fixing θ^(m), C_s^(m), and C_c^(m), problem (3) reduces to a least-squares subproblem whose solutions D_s^(m) and D_c^(m) are given directly in closed form;
3) updating the coefficients C_s^(m) and C_c^(m): fixing θ^(m), D_s^(m), and D_c^(m), problem (3) converts to subproblem (13); introducing an auxiliary variable V^(m), problem (13) is split, and in this way C_s^(m), C_c^(m), and V^(m) are obtained by alternately iterating over the following sub-problems (14)-(16):
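The direct dictionary solution can be sketched as below. The specific formula D = O Cᵀ(C Cᵀ + εI)⁻¹, the least-squares minimizer of ||O − D C||_F² over D with a tiny ridge ε for numerical invertibility, is a standard closed form and an assumption here, since the patent's own equations are not reproduced in the source text.

```python
import numpy as np

# Hedged sketch of a closed-form dictionary update: the least-squares minimizer
# of ||O - D C||_F^2 over D; the eps ridge is for numerical invertibility only.
def update_dictionary(O, C, eps=1e-8):
    k = C.shape[0]
    return O @ C.T @ np.linalg.inv(C @ C.T + eps * np.eye(k))

rng = np.random.default_rng(3)
C = rng.standard_normal((6, 50))            # coefficients (atoms x samples)
D_true = rng.standard_normal((10, 6))       # hypothetical generating dictionary
O = D_true @ C                              # noiseless layer output
D = update_dictionary(O, C)
print(np.allclose(D, D_true, atol=1e-4))    # exact recovery in the noiseless case
```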
from the least-squares solution method, the solutions of (14) and (15) are obtained in closed form, where I denotes the identity matrix; the solution of (16) is obtained by the soft-thresholding method, where soft(a, b) = sign(a)·max(|a| − b, 0) and sign(x) is the sign function;
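The two coefficient updates just described can be sketched as follows. The soft-thresholding operator matches the definition in the text; the ridge (regularized least-squares with identity matrix I) form for the collaborative coefficients and all sizes and penalty values are illustrative assumptions.

```python
import numpy as np

# Sketch of the coefficient updates: a regularized least-squares (ridge)
# solution for the collaborative coefficients and the soft-thresholding
# operator soft(a, b) = sign(a) * max(|a| - b, 0) for the sparse side.
def collaborative_coeffs(D_c, O, p_c):
    k = D_c.shape[1]
    # C_c = (D_c^T D_c + p_c I)^(-1) D_c^T O   (I is the identity matrix)
    return np.linalg.solve(D_c.T @ D_c + p_c * np.eye(k), D_c.T @ O)

def soft(a, b):
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

rng = np.random.default_rng(4)
D_c = rng.standard_normal((10, 6))
O = rng.standard_normal((10, 20))
C_c = collaborative_coeffs(D_c, O, p_c=1e-3)
print(C_c.shape)
print(soft(np.array([1.0, 0.2, -2.0]), 0.5))
```

Note how soft thresholding shrinks large entries toward zero by b and zeroes out entries with magnitude below b, which is what produces sparsity in C_s^(m).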
after the iterative optimization of 1) to 3), the optimal D_s^(m), D_c^(m), C_s^(m), C_c^(m), and θ^(m) are learned.
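The alternating scheme 1)-3) can be exercised end to end on a single layer with the network output O held fixed. The objective below (two reconstruction terms plus an ℓ1 penalty p_s on C_s and an ℓ2 penalty p_c on C_c) is an illustrative stand-in for the patent's model, and the proximal-gradient (ISTA) step is one standard way to realize the soft-thresholding update; none of this is claimed to be the patent's exact formulation.

```python
import numpy as np

# Toy alternating iteration on one layer with the network output O fixed.
# Assumed illustrative objective (not the patent's exact model):
#   ||O - D_s C_s||_F^2 + ||O - D_c C_c||_F^2 + p_s ||C_s||_1 + p_c ||C_c||_F^2
rng = np.random.default_rng(6)
O = rng.standard_normal((8, 40))
D_s = rng.standard_normal((8, 5))
D_c = rng.standard_normal((8, 5))
C_s = np.zeros((5, 40))
C_c = np.zeros((5, 40))
p_s, p_c = 0.1, 0.1

def soft(a, b):
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

def objective():
    return (np.sum((O - D_s @ C_s) ** 2) + np.sum((O - D_c @ C_c) ** 2)
            + p_s * np.abs(C_s).sum() + p_c * np.sum(C_c ** 2))

hist = [objective()]
for _ in range(30):
    # 3a) sparse coefficients: one proximal-gradient (ISTA) step = soft thresholding
    step = 1.0 / (2.0 * np.linalg.norm(D_s, 2) ** 2)
    C_s = soft(C_s - 2.0 * step * (D_s.T @ (D_s @ C_s - O)), step * p_s)
    # 3b) collaborative coefficients: closed-form regularized least squares
    C_c = np.linalg.solve(D_c.T @ D_c + p_c * np.eye(5), D_c.T @ O)
    # 2) sparse dictionary: closed-form least squares (tiny ridge for stability)
    D_s = O @ C_s.T @ np.linalg.inv(C_s @ C_s.T + 1e-6 * np.eye(5))
    hist.append(objective())
print(hist[-1] < hist[0])
```

Each block update is non-increasing for this toy objective, so the recorded objective values should decrease over the iterations, mirroring the convergence behavior expected of the alternating scheme.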
In step five, for the test samples Y, the output of the m-th layer of the network is obtained, and the corresponding sparse coefficients and collaborative coefficients are calculated from the learned dictionaries D_s^(m) and D_c^(m); the calculated sparse coefficients and collaborative coefficients are combined by weighted summation, with weight t satisfying 0 < t < 1, to obtain the per-layer feature representation; finally, the feature representations corresponding to the M layers of the network are stacked into a whole, obtaining the feature representation of the test samples Y.
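The fusion and stacking of step five can be sketched as below; the t / (1 − t) split of the weighted summation, the number of layers, and all shapes are illustrative assumptions.

```python
import numpy as np

# Step five, sketched: blend each layer's sparse and collaborative coefficients
# with a weight 0 < t < 1 (the t / (1 - t) split is an assumption), then stack
# the M per-layer features into one representation for the test samples.
rng = np.random.default_rng(5)
M, n_atoms, n_test, t = 3, 6, 15, 0.5
C_s = [rng.standard_normal((n_atoms, n_test)) for _ in range(M)]  # sparse coeffs per layer
C_c = [rng.standard_normal((n_atoms, n_test)) for _ in range(M)]  # collaborative coeffs
F_layers = [t * cs + (1 - t) * cc for cs, cc in zip(C_s, C_c)]    # weighted summation
F = np.vstack(F_layers)          # stack the M layer features into a whole
print(F.shape)                   # (18, 15)
```

Stacking rather than averaging across layers is what lets the final representation keep both shallow and deep features, as the description emphasizes.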
According to the invention, a deep network is adopted for nonlinear mapping, a sparse-collaborative representation model is established at each layer of the deep network, the redundancy and nonlinearity of the pixel samples and the correlation among samples are fully considered, and the deep and shallow features output by the network are utilized, so that more discriminative features are extracted.
Claims (3)
1. The hyperspectral image characteristic representation method based on the depth joint sparse-collaborative representation is characterized by comprising the following steps of:
step one, performing matrix expansion on the three-dimensional hyperspectral image to obtain its two-dimensional matrix form H ∈ R^(L×N), where each column represents a pixel sample, the column count N is the spatial resolution (number of pixels) of the hyperspectral image, and the row count L is the spectral resolution (number of bands) of the hyperspectral image;
step two, sampling from the matrix, namely extracting a certain number of pixel vectors from each class to form the training samples X ∈ R^(L×N_r), the remaining pixel vectors forming the test samples Y ∈ R^(L×N_e), with N = N_r + N_e;
step three, introducing a multi-layer forward neural network and mapping each sample to a nonlinear feature space, in this way describing and mining the nonlinear relations among the samples; then introducing sparse representation and collaborative representation into each layer of the network, establishing a depth joint sparse-collaborative representation model, and learning the sparse and collaborative coefficients of each layer;
step four, designing an alternating iterative update algorithm to solve the depth joint sparse-collaborative representation model proposed in step three;
step five, learning from the training samples X, through the alternating iterative update algorithm of step four, the optimal sparse dictionaries D_s^(m), collaborative dictionaries D_c^(m), and neural network parameters θ^(m), and then predicting the feature representation corresponding to the test samples Y: for the test samples Y, the output of the m-th layer of the network is obtained, the corresponding sparse coefficients and collaborative coefficients are calculated from the learned dictionaries, the calculated sparse coefficients and collaborative coefficients are combined by weighted summation, with weight t satisfying 0 < t < 1, into the per-layer feature representation, and finally the feature representations corresponding to the M layers of the network are stacked into a whole, obtaining the feature representation of the test samples Y.
2. The hyperspectral image feature representation method based on depth joint sparse-collaborative representation according to claim 1, wherein in step three a multi-layer forward neural network is introduced to map each sample to a nonlinear feature space, in this way describing and mining the nonlinear relations among the samples; then sparse representation and collaborative representation are introduced into each layer of the network, a depth joint sparse-collaborative representation model (1) is established, and the sparse and collaborative coefficients of each layer are learned, where θ = {W^(m), b^(m), m = 1:M} denotes the parameters (weights and biases) of the forward neural network, M being the number of layers of the network; D_s and D_c denote the sparse dictionary and the collaborative dictionary; C_s and C_c are the sparse coefficients and the collaborative coefficients; α is a weighting coefficient; p_s and p_c are very small positive numbers; for the neural network, the output of the m-th layer is O^(m) = G(W^(m) O^(m−1) + b^(m)), where O^(0) = X, G(·) denotes the activation function, and d_m is the output dimension of the m-th layer; the model is further expressed layer-wise as model (2), where D_s^(m) and D_c^(m) are the sparse and collaborative dictionaries of the m-th layer, C_s^(m) and C_c^(m) are the sparse and collaborative coefficients of the m-th layer, and θ^(m) = {W^(m), b^(m)} are the parameters of the network at the m-th layer.
3. The hyperspectral image feature representation method based on depth joint sparse-collaborative representation according to claim 1, wherein in step four, designing an alternating iterative update algorithm to solve the depth joint sparse-collaborative representation model proposed in step three specifically includes: transforming the depth joint sparse-collaborative representation model (2) of step three into an unconstrained form, denoted (3);
1) updating θ^(m) = {W^(m), b^(m)}: fixing the dictionaries D_s^(m), D_c^(m) and the coefficients C_s^(m), C_c^(m) yields subproblem (4); combining O^(m) = G(W^(m) O^(m−1) + b^(m)), the objective function of (4) is differentiated with respect to W^(m) and b^(m), and the chain rule yields the gradient terms, where T^(m) = W^(m) O^(m−1) + b^(m), G′(·) is the derivative of the activation function G(·), and ⊙ denotes the matrix dot-product (elementwise) operator; the neural network parameters θ^(m) = {W^(m), b^(m)} are updated by the following gradient-descent equation:
wherein Γ is the step size;
2) updating the dictionaries D_s^(m) and D_c^(m): fixing θ^(m), C_s^(m), and C_c^(m), problem (3) reduces to a least-squares subproblem whose solutions D_s^(m) and D_c^(m) are given directly in closed form;
3) updating the coefficients C_s^(m) and C_c^(m): fixing θ^(m), D_s^(m), and D_c^(m), problem (3) converts to subproblem (13); introducing an auxiliary variable V^(m), problem (13) is split, and in this way C_s^(m), C_c^(m), and V^(m) are obtained by alternately iterating over the following sub-problems (14)-(16):
according to the least-squares solution method, the solutions of (14) and (15) are obtained in closed form, where I denotes the identity matrix; the solution of (16) is obtained by the soft-thresholding method, where soft(a, b) = sign(a)·max(|a| − b, 0) and sign(x) is the sign function;
after the iterative optimization of 1) to 3), the optimal D_s^(m), D_c^(m), C_s^(m), C_c^(m), and θ^(m) are learned.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111492113.4A CN113989528B (en) | 2021-12-08 | 2021-12-08 | Hyperspectral image characteristic representation method based on depth joint sparse-collaborative representation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113989528A CN113989528A (en) | 2022-01-28 |
CN113989528B true CN113989528B (en) | 2023-07-25 |
Family
ID=79733488
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111492113.4A Active CN113989528B (en) | 2021-12-08 | 2021-12-08 | Hyperspectral image characteristic representation method based on depth joint sparse-collaborative representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113989528B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114998622B (en) * | 2022-06-01 | 2023-09-29 | 南京航空航天大学 | Hyperspectral image feature extraction method based on kernel Taylor decomposition |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160310A (en) * | 2020-01-02 | 2020-05-15 | 西北工业大学 | Hyperspectral abnormal target detection method based on self-weight collaborative representation |
CN112750091A (en) * | 2021-01-12 | 2021-05-04 | 云南电网有限责任公司电力科学研究院 | Hyperspectral image unmixing method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105139028B (en) * | 2015-08-13 | 2018-05-25 | 西安电子科技大学 | SAR image sorting technique based on layering sparseness filtering convolutional neural networks |
CN105469360B (en) * | 2015-12-25 | 2018-11-30 | 西北工业大学 | The high spectrum image super resolution ratio reconstruction method indicated based on non local joint sparse |
CN108108719A (en) * | 2018-01-05 | 2018-06-01 | 重庆邮电大学 | A kind of Weighted Kernel is sparse and cooperates with the Hyperspectral Image Classification method for representing coefficient |
CN110717354B (en) * | 2018-07-11 | 2023-05-12 | 哈尔滨工业大学 | Super-pixel classification method based on semi-supervised K-SVD and multi-scale sparse representation |
- 2021-12-08: CN application CN202111492113.4A granted as patent CN113989528B (status: active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160310A (en) * | 2020-01-02 | 2020-05-15 | 西北工业大学 | Hyperspectral abnormal target detection method based on self-weight collaborative representation |
CN112750091A (en) * | 2021-01-12 | 2021-05-04 | 云南电网有限责任公司电力科学研究院 | Hyperspectral image unmixing method |
Also Published As
Publication number | Publication date |
---|---|
CN113989528A (en) | 2022-01-28 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |