CN116563639A - Lightweight multi-scale dense network hyperspectral remote sensing image classification method - Google Patents
- Publication number: CN116563639A (application number CN202310591613.6A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- hyperspectral remote
- space
- spectrum
- dimension
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V10/764 — Image or video recognition using machine-learning classification, e.g. of video objects
- G06V10/7715 — Feature extraction, e.g. by transforming the feature space; mappings, e.g. subspace methods
- G06V10/82 — Image or video recognition using neural networks
- G06V20/194 — Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/048 — Activation functions
- G06N3/084 — Backpropagation, e.g. using gradient descent
- Y02A40/10 — Adaptation technologies in agriculture
Abstract
The invention discloses a lightweight multi-scale dense network hyperspectral remote sensing image classification method, comprising the following steps: S1: performing dimension reduction on the hyperspectral remote sensing image data to obtain dimension-reduced image data; S2: inputting the dimension-reduced image data into an SMSDN network and extracting the space-spectrum features of the image to obtain a space-spectrum feature map; S3: sending the obtained space-spectrum feature map into a fully connected layer, converting the three-dimensional feature map output by the convolutions into a one-dimensional feature vector, and mapping the feature space computed by the preceding layers into the sample space. The invention relates to the field of image processing, in particular to a lightweight multi-scale dense network hyperspectral remote sensing image classification method. The technical problem the invention aims to solve is to provide a lightweight multi-scale dense network hyperspectral remote sensing image classification method that extracts feature information better, reduces the amount of computation, and maintains image classification accuracy.
Description
Technical Field
The invention relates to the field of image processing, in particular to a lightweight multi-scale dense network hyperspectral remote sensing image classification method.
Background
With the development of spectral imagers, hyperspectral sensors can provide images with high spectral and spatial resolution and can simultaneously capture the two-dimensional spatial information and one-dimensional spectral information of a target, realizing "map-spectrum integration". Because hyperspectral remote sensing image (HSI) data integrate image and spectrum in this way, HSI analysis has been widely used in different fields such as agriculture and land-change monitoring, atmospheric geography, and mineral exploitation. Moreover, hyperspectral sensors provide continuous spectral curves that many conventional spectral sensors cannot. The response curve of each pixel of an HSI across the electromagnetic bands reflects spectral information such as the reflection and radiation of the target ground object in the corresponding band, and the spectral response curves of different ground objects differ significantly. Hyperspectral remote sensing image classification has therefore become an increasingly active research hotspot.
In recent years, with the development of convolutional neural networks (CNNs), convolutional models have been applied to hyperspectral remote sensing image classification. Two-dimensional (2D-CNN) and three-dimensional (3D-CNN) convolutional neural networks extract spatial and spectral features simultaneously from the original three-dimensional HSI data, realizing full fusion of spatial and spectral information. However, because a CNN is a simple stack of convolution kernels, increasing the number of convolution layers causes gradient vanishing or gradient explosion, and there is no effective feature aggregation.
To address these problems, researchers proposed 3D-DenseNet for hyperspectral remote sensing image classification, learning more powerful spectral-spatial information. However, in its feature-aggregation process the deep features extracted in the last layer enter the classifier directly, and shallow features have little influence on the classifier. A multi-scale dense network (MSDN) was therefore proposed: a three-dimensional network, shown in fig. 1, that extracts spectral-spatial information simultaneously and aggregates features throughout the model rather than only at its end. However, maintaining all of the finer scales up to the last layer of the network is inefficient, occupies a large amount of memory, and increases the amount of computation.
Aiming at these problems, the invention provides a hyperspectral remote sensing image classification method based on a lightweight multi-scale dense network (Short Multiscale Dense Network, SMSDN). Deep extraction of HSI features is performed using densely connected three-dimensional convolutions in the horizontal direction, and max-pooling downsampling in the vertical direction generates feature maps at three scales: low, medium, and high level. The network is then simplified on this basis: it is divided into blocks in the horizontal direction, and the coarsest scale is maintained only in the first block. Compared with other methods, the method reduces the amount of computation while keeping accuracy unchanged.
Disclosure of Invention
The invention aims to solve the technical problem of providing a lightweight multi-scale dense network hyperspectral remote sensing image classification method, which is capable of better extracting characteristic information, reducing calculated amount and keeping image classification precision.
The invention adopts the following technical scheme to realize the aim of the invention:
the light-weight multi-scale dense network hyperspectral remote sensing image classification method is characterized by comprising the following steps of:
s1: performing dimension reduction processing on the hyperspectral remote sensing image data to obtain dimension reduced image data;
s2: inputting the dimension-reduced image data into an SMSDN network, and extracting the space-spectrum characteristics of the image to obtain a space-spectrum characteristic diagram of the image;
s3: sending the obtained space-spectrum characteristic diagram into a full connection layer, converting the three-dimensional characteristic diagram output by convolution into a one-dimensional characteristic vector, and mapping the characteristic space obtained by calculation of a front layer into a sample space;
s4: optimizing the extracted space-spectrum characteristics by using an optimizer;
s5: and classifying the space-spectrum characteristics by using a Softmax classifier to obtain a classification result.
As a further limitation of the technical scheme, the dimension reduction processing for the hyperspectral remote sensing image comprises the following steps: and processing the hyperspectral remote sensing image by adopting a principal component analysis method to reduce the dimension of the spectrum, wherein the number of principal components of the spectrum after dimension reduction is 30.
As a further limitation of the present technical solution, the S2 specifically includes:
s201: in the horizontal direction, performing depth extraction of the HSI features by using three-dimensional convolution dense connection, and placing a transition layer after aggregation of each dense connection feature, so that the number of parameters in the dense structure is reduced;
s202: in the vertical direction, performing downsampling operation by using maximum pooling to generate three scale feature maps of low, medium and high levels;
s203: dividing the network in the horizontal directionBlock, only at->The coarsest->Proportion.
As a further limitation of the present technical solution, the activation function of the fully connected layer uses the Softmax probability activation function:

Softmax(z_i) = exp(z_i) / Σ_{j=1}^{K} exp(z_j)   (1)

wherein: z_i represents the linear transformation undergone by the input vector at a layer of the neural network; the nonlinear result finally output depends on the current position of the neuron in the network structure.
As a further limitation of the technical scheme, in S4 an Adam optimizer is selected, the network is trained with a classification loss function through the back-propagation algorithm, and the parameters are updated by back propagation;

the classification loss function is expressed as follows:

L = -(1/M) Σ_{i=1}^{M} Σ_{c=1}^{K} y_{ic} · log(p_{ic})   (2)

wherein: y_{ic} represents the true value; p_{ic} represents the predicted value; M represents the total number of samples in a mini-batch; K represents the total number of ground-object classes.
As a further limitation of the present solution, the following three modules are employed:
the dimension reduction module is used for carrying out dimension reduction processing on the spectrum dimension information of the hyperspectral remote sensing image, so that data redundancy is reduced;
the feature extraction module is used for simultaneously extracting the space and spectrum dimension features of the dimension-reduced hyperspectral remote sensing image in the horizontal direction and the vertical direction by using the SMSDN network to obtain a space-spectrum feature map;
and the optimizing and classifying module is used for processing the image extracted with the key information by utilizing the optimizer to obtain a final classifying result.
Compared with the prior art, the invention has the advantages and positive effects that:
the hyperspectral remote sensing image has the characteristic of 'map unification', so that the key of hyperspectral remote sensing image classification is extraction of image space-spectrum characteristics, but MSDN keeps all finer proportions to be inefficient before the last layer of a network, occupies a large amount of memory, and increases the calculation amount. The invention provides a hyperspectral remote sensing image classification method of a lightweight multi-scale dense network, which can solve the problem that the MSDN has limitation on feature extraction and effectively reduces the calculated amount. Experiments show that the method can reduce the calculated amount compared with other methods, and the image classification accuracy is maintained.
Drawings
Fig. 1 is a model diagram of a hyperspectral remote sensing image classification method for a multiscale dense network.
Fig. 2 is a flowchart of a method for classifying hyperspectral remote sensing images of a lightweight multi-scale dense network according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of comparison of classification results of a data set in a hyperspectral remote sensing image classification method of a lightweight multi-scale dense network according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of comparison of classification results of a data set in a hyperspectral remote sensing image classification method of a lightweight multi-scale dense network according to an embodiment of the present invention.
Detailed Description
One embodiment of the present invention will be described in detail below with reference to the attached drawings, but it should be understood that the scope of the present invention is not limited by the embodiment.
The embodiment provides a method for classifying hyperspectral remote sensing images of a lightweight multi-scale dense network (Short Multiscale Dense Network, SMSDN). The general flow chart is shown in fig. 2, and the specific steps are as follows:
s1: and performing dimension reduction processing on the hyperspectral remote sensing image data to obtain dimension reduced image data.
The dimension reduction processing for the hyperspectral remote sensing image comprises the following steps: and processing the hyperspectral remote sensing image by adopting a principal component analysis method to reduce the dimension of the spectrum, wherein the number of principal components of the spectrum after dimension reduction is 30.
Specifically, principal component analysis (PCA) is used to perform dimension reduction on the hyperspectral remote sensing image; the dimension reduction converts the large variable set of the hyperspectral remote sensing image into a smaller variable set while retaining most of the information in the image spectrum.
S11: input original data with the structure ofFind out the original +.>A>Dimension space;
S12: determining feature quantity after dimension reductionHere->The method comprises the steps of carrying out a first treatment on the surface of the (empirically valued)
S13: and analyzing the hyperspectral remote sensing image through a PCA function in sklean.
S2: and inputting the image data subjected to the dimension reduction into an SMSDN network, and extracting the space-spectrum characteristics of the image to obtain a space-spectrum characteristic diagram of the image.
The step S2 specifically comprises the following steps:
s201: in the horizontal direction, performing depth extraction of the HSI features by using three-dimensional convolution dense connection, and placing a transition layer after aggregation of each dense connection feature, so that the number of parameters in the dense structure is reduced;
s202: in the vertical direction, performing downsampling operation by using maximum pooling to generate three scale feature maps of low, medium and high levels;
s203: dividing the network in the horizontal directionBlock, only at->The coarsest->Proportion.
Specifically, the features of the dimension-reduced image are first extracted by convolution: the image passes in turn through two convolution layers with 64 and 128 convolution kernels of size 3×3 respectively; to keep the feature-map size unchanged, the stride of each layer is set to 1.

Then, the second path performs two convolutions with this convolution block; the input of the second path is the fusion of the output feature of the previous convolution block and the output feature of the first path.

Finally, downsampling is performed again by max pooling to generate high-level features. The convolution block of the third path consists of 96 1×1 and 128 3×3 convolutions, and the third path convolves three times with this block; the input of its third convolution is the fusion of the downsampled output of the first convolution of the second path, the output of the first convolution of the third path, the output feature of the second convolution of the third path, and the output feature of the second convolution of the second path.
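The dense connection with a trailing transition layer described above can be sketched with PyTorch-style 3-D convolutions; the layer widths, growth rate, and input shape below are illustrative assumptions, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """Each layer takes the concatenation of all previous feature maps
    (dense connection) and contributes `growth` new channels; a 1x1x1
    transition convolution then halves the channel count."""
    def __init__(self, in_ch, growth=32, layers=2):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
                nn.Conv3d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        self.transition = nn.Conv3d(ch, ch // 2, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return self.transition(torch.cat(feats, dim=1))

x = torch.randn(1, 16, 8, 9, 9)   # (batch, channels, bands, height, width)
y = DenseBlock3D(16)(x)
print(y.shape)                    # channels: (16 + 2*32) // 2 = 40
```

Max pooling between such blocks (the vertical direction of the model) would produce the low-, medium- and high-level scale feature maps described in S202.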
S3: and sending the obtained space-spectrum characteristic diagram into a full connection layer (FC), converting the three-dimensional characteristic diagram output by convolution into a one-dimensional characteristic vector, and mapping the characteristic space obtained by calculation of the previous layer into a sample space.
The activation function of the fully connected layer uses the Softmax probability activation function:

Softmax(z_i) = exp(z_i) / Σ_{j=1}^{K} exp(z_j)   (1)

wherein: z_i represents the linear transformation undergone by the input vector at a layer of the neural network; the nonlinear result finally output depends on the current position of the neuron in the network structure.
S4: optimizing the extracted space-spectrum characteristics by using an optimizer to obtain a classification result, wherein the method specifically comprises the following steps:
in the step S4, an Adam optimizer is selected, a back propagation algorithm of the optimizer is trained by using a classification loss function (Softmax loss), and parameters are updated by back propagation;
the classification loss function is expressed as follows:
(2)
wherein:representing the true value;
representing the predicted value;
representing the total number of small batches of samples;
representing the total number of feature coverage classes.
The Adam optimizer has high computational efficiency and small memory requirements.
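The mini-batch cross-entropy loss of Eq. (2) can be checked with a short sketch; the one-hot targets and predicted probabilities below are made-up numbers, not results from the patent.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean over the mini-batch of -sum_c y_ic * log(p_ic), i.e. Eq. (2)."""
    y_pred = np.clip(y_pred, eps, 1.0)       # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# two samples, three classes, one-hot ground truth
y_true = np.array([[1, 0, 0], [0, 1, 0]])
y_pred = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
loss = cross_entropy(y_true, y_pred)
print(round(loss, 4))  # 0.2899
```

The gradient of this loss with respect to the network weights is what Adam uses to update the parameters during back propagation.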
S5: and classifying the space-spectrum characteristics by using a Softmax classifier to obtain a classification result.
A number of experiments were performed on the Indian Pines (IP) and Pavia University (PU) datasets. Classification performance was evaluated using the Kappa coefficient, OA (overall accuracy) and AA (average accuracy), specifically:
the Kappa coefficient represents the consistency detection index coefficient of the classification prediction category and the actual category of the test sample, the value range is [ -1,1], and the higher the value of the Kappa coefficient is, the higher the classification accuracy realized by the representative model is, namely the higher the consistency of the actual category and the prediction sample category of the sample is. The calculation formula is as follows:
(3)
wherein:indicate->Total number of test samples for a row;
indicate->Total number of test samples for a column;
what represents the total number of test samples for the row;
representing the category number;
representing the total number of test samples;
OA is the ratio of the number of correctly classified samples to the number of all samples and is an important index for measuring overall classification accuracy; the higher the overall accuracy, the better the prediction capability of the network model and the more accurate the classification. The overall accuracy (OA) is calculated as follows:

OA = Σ_{i=1}^{n} x_{ii} / N   (4)

wherein: x_{ii} denotes the total number of test samples in the i-th row and i-th column; n denotes the number of categories; N denotes the total number of test samples.
AA is the average of the classification accuracy over the categories, i.e. the average of the ratio of the number of correctly classified samples of each ground-object category to the total number of samples of that category. The average accuracy measures the classification accuracy of the network model at the category level; the higher the average accuracy, the more accurately the model classifies each category. AA is calculated as follows:

AA = (1/n) Σ_{i=1}^{n} a_i   (5)

wherein: a_i denotes the overall classification accuracy of the i-th class; n denotes the number of categories.
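The three evaluation indexes of Eqs. (3)-(5) can all be computed from one confusion matrix; the matrix below is a made-up example, not taken from the patent's experiments.

```python
import numpy as np

def kappa_oa_aa(cm):
    """Kappa (Eq. 3), overall accuracy (Eq. 4) and average accuracy (Eq. 5)
    from an n x n confusion matrix cm (rows: true class, columns: predicted)."""
    cm = np.asarray(cm, dtype=float)
    N = cm.sum()                                    # total test samples
    diag = np.diag(cm)                              # correctly classified
    oa = diag.sum() / N
    aa = np.mean(diag / cm.sum(axis=1))             # mean per-class accuracy
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / N**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return kappa, oa, aa

cm = [[50, 2, 3],
      [5, 40, 5],
      [2, 3, 45]]
k, oa, aa = kappa_oa_aa(cm)
print(round(k, 3), round(oa, 3), round(aa, 3))
```

Note that OA weights large classes more heavily while AA treats every class equally, which is why both are reported alongside Kappa in tables 1 and 2.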
S6: in order to compare the effectiveness and superiority of the method of the above embodiment with the current classification model, the comparison result of the Indian pins dataset is shown in fig. 3, where (a) is the original hyperspectral remote sensing image, (b) is the classification image using 3D-CNN, (c) is the classification result of the hybrid sn network, (D) is the classification result of the MSDN network, and (e) is the classification result of the method described in this embodiment. The comparison result of the Pavia University dataset is shown in fig. 4, where (a) is an original hyperspectral remote sensing image, (b) is a classification image using 3D-CNN, (c) is a classification result of the hybrid sn network, (D) is a classification result of the MSDN network, and (e) is a classification result of the method described in this embodiment.
Table 1 below shows the per-feature classification accuracy and the AA, OA and Kappa coefficients of the different networks on the Indian Pines dataset, and table 2 below shows the same for the Pavia University dataset.
TABLE 1 Classification accuracy of the various ground features and the AA, OA and Kappa coefficients of the different networks on the Indian Pines dataset
As can be seen from table 1, on the IP dataset the described classification method is not only superior to the other models on all three evaluation indexes Kappa, OA and AA, i.e. it has higher classification accuracy, but also requires relatively little experimental time. The method described in this example is nearly four times faster than the multi-scale dense network (MSDN) before the lightweight simplification, greatly reducing the runtime.
TABLE 2 Classification accuracy of the various ground features and the AA, OA and Kappa coefficients of the different networks on the Pavia University dataset
As can be seen from table 2, on the PU dataset the classification method described in this example improves over 3D-CNN on all three evaluation indexes Kappa, OA and AA, obtains better classification results, and also shortens the runtime. Compared with HybridSN and the multi-scale dense network (MSDN), the method is comparable on the three evaluation indexes Kappa, OA and AA, but on a dataset with a larger data volume like PU it performs better at shortening the experimental time, being four times faster than the MSDN network.
A lightweight multi-scale dense network hyperspectral remote sensing image classification method adopts the following three modules:
the dimension reduction module is used for carrying out dimension reduction processing on the spectrum dimension information of the hyperspectral remote sensing image, so that data redundancy is reduced;
the feature extraction module is used for simultaneously extracting the space and spectrum dimension features of the dimension-reduced hyperspectral remote sensing image in the horizontal direction and the vertical direction by using the SMSDN network to obtain a space-spectrum feature map;
and the optimizing and classifying module is used for processing the image extracted with the key information by utilizing the optimizer to obtain a final classifying result.
The above disclosure is merely illustrative of specific embodiments of the present invention, but the present invention is not limited thereto, and any variations that can be considered by those skilled in the art should fall within the scope of the present invention.
Claims (6)
1. The light-weight multi-scale dense network hyperspectral remote sensing image classification method is characterized by comprising the following steps of:
s1: performing dimension reduction processing on the hyperspectral remote sensing image data to obtain dimension reduced image data;
s2: inputting the dimension-reduced image data into an SMSDN network, and extracting the space-spectrum characteristics of the image to obtain a space-spectrum characteristic diagram of the image;
s3: sending the obtained space-spectrum characteristic diagram into a full connection layer, converting the three-dimensional characteristic diagram output by convolution into a one-dimensional characteristic vector, and mapping the characteristic space obtained by calculation of a front layer into a sample space;
s4: optimizing the extracted space-spectrum characteristics by using an optimizer;
s5: and classifying the space-spectrum characteristics by using a Softmax classifier to obtain a classification result.
2. The method for classifying the hyperspectral remote sensing images of the lightweight multi-scale dense network according to claim 1, which is characterized in that: the dimension reduction processing for the hyperspectral remote sensing image comprises the following steps: and processing the hyperspectral remote sensing image by adopting a principal component analysis method to reduce the dimension of the spectrum, wherein the number of principal components of the spectrum after dimension reduction is 30.
3. The method for classifying the hyperspectral remote sensing images of the lightweight multi-scale dense network according to claim 2, which is characterized in that: the step S2 specifically comprises the following steps:
s201: in the horizontal direction, performing depth extraction of the HSI features by using three-dimensional convolution dense connection, and placing a transition layer after aggregation of each dense connection feature, so that the number of parameters in the dense structure is reduced;
s202: in the vertical direction, performing downsampling operation by using maximum pooling to generate three scale feature maps of low, medium and high levels;
s203: dividing the network in the horizontal directionBlock, only at->The coarsest->Proportion.
4. The lightweight multi-scale dense network hyperspectral remote sensing image classification method according to claim 3, characterized in that the activation function of the fully connected layer uses the Softmax probability activation function:

$$\mathrm{Softmax}(z_i)=\frac{e^{z_i}}{\sum_{j=1}^{K}e^{z_j}} \tag{1}$$

wherein: $z_i$ represents the linear transformation undergone by the input vector at the $i$-th output neuron of the layer, and $K$ is the number of outputs; the nonlinear result finally output depends on the current position of the neuron in the network structure.
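Assuming the probability activation function of formula (1) is the standard Softmax (the original formula image is missing from the text), a minimal numerically stable sketch is:

```python
import numpy as np

def softmax(z):
    """Standard Softmax; subtracting the max avoids overflow in exp."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())          # probabilities sum to 1
```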
5. The lightweight multi-scale dense network hyperspectral remote sensing image classification method according to claim 4, characterized in that: in step S4, the Adam optimizer is selected, the network is trained with a classification loss function via the back propagation algorithm, and the parameters are updated through back propagation;
the classification loss function is expressed as follows:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\log \hat{y}_{i,c} \tag{2}$$

wherein: $y_{i,c}$ represents the true value; $\hat{y}_{i,c}$ represents the predicted value; $N$ represents the total number of samples in a mini-batch; $C$ represents the total number of land-cover classes.
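Assuming the classification loss of formula (2) is the standard mean categorical cross-entropy over a mini-batch (the formula image is missing, but the described symbols match it), a minimal sketch with hypothetical label and prediction arrays is:

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy.
    y_true: (N, C) one-hot labels; y_pred: (N, C) predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)             # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

y_true = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)   # N=2, C=3
y_pred = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
loss = cross_entropy(y_true, y_pred)
print(loss)
```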
6. The lightweight multi-scale dense network hyperspectral remote sensing image classification method according to claim 1, characterized in that the method adopts the following three modules:
the dimension reduction module, used for performing dimension reduction processing on the spectral-dimension information of the hyperspectral remote sensing image to reduce data redundancy;
the feature extraction module, used for simultaneously extracting the spatial and spectral features of the dimension-reduced hyperspectral remote sensing image in the horizontal and vertical directions with the SMSDN network to obtain a space-spectrum feature map;
and the optimization and classification module, used for processing the feature map containing the extracted key information with the optimizer to obtain the final classification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310591613.6A CN116563639A (en) | 2023-05-24 | 2023-05-24 | Lightweight multi-scale dense network hyperspectral remote sensing image classification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116563639A true CN116563639A (en) | 2023-08-08 |
Family
ID=87501672
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310591613.6A Pending CN116563639A (en) | 2023-05-24 | 2023-05-24 | Lightweight multi-scale dense network hyperspectral remote sensing image classification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116563639A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111127374A (en) * | 2019-11-22 | 2020-05-08 | 西北大学 | Pan-sharing method based on multi-scale dense network |
CN113011499A (en) * | 2021-03-22 | 2021-06-22 | 安徽大学 | Hyperspectral remote sensing image classification method based on double-attention machine system |
CN113705526A (en) * | 2021-09-07 | 2021-11-26 | 安徽大学 | Hyperspectral remote sensing image classification method |
Non-Patent Citations (3)
Title |
---|
CHUNJU ZHANG et al.: "Multi-Scale Dense Networks for Hyperspectral Remote Sensing Image Classification", IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 11, 1 August 2019 (2019-08-01), pages 9201-9222, XP011755632, DOI: 10.1109/TGRS.2019.2925615 * |
GAO HUANG et al.: "Multi-Scale Dense Networks for Resource Efficient Image Classification", arXiv, 7 June 2018 (2018-06-07), pages 1-14 * |
LIU HAIYAN: "Deep Learning-Based Hyperspectral Remote Sensing Image Classification Method", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 04, 15 April 2022 (2022-04-15), pages 028-112 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111191736B (en) | Hyperspectral image classification method based on depth feature cross fusion | |
CN110135267B (en) | Large-scene SAR image fine target detection method | |
CN112733749B (en) | Real-time pedestrian detection method integrating attention mechanism | |
Zhao et al. | TBC-Net: A real-time detector for infrared small target detection using semantic constraint | |
Han et al. | Joint spatial-spectral hyperspectral image classification based on convolutional neural network | |
CN111310666B (en) | High-resolution image ground feature identification and segmentation method based on texture features | |
CN113095409B (en) | Hyperspectral image classification method based on attention mechanism and weight sharing | |
CN111191514A (en) | Hyperspectral image band selection method based on deep learning | |
CN110619352A (en) | Typical infrared target classification method based on deep convolutional neural network | |
CN108229551B (en) | Hyperspectral remote sensing image classification method based on compact dictionary sparse representation | |
Diakite et al. | Hyperspectral image classification using 3D 2D CNN | |
CN111667019A (en) | Hyperspectral image classification method based on deformable separation convolution | |
CN115240072A (en) | Hyperspectral multi-class change detection method based on multidirectional multi-scale spectrum-space residual convolution neural network | |
CN112733736A (en) | Class imbalance hyperspectral image classification method based on enhanced oversampling | |
CN111639697A (en) | Hyperspectral image classification method based on non-repeated sampling and prototype network | |
CN115272865A (en) | Target detection method based on adaptive activation function and attention mechanism | |
Song et al. | Multi-source remote sensing image classification based on two-channel densely connected convolutional networks. | |
Yuan et al. | Mslm-rf: A spatial feature enhanced random forest for on-board hyperspectral image classification | |
CN111563525A (en) | Moving target detection method based on YOLOv3-Tiny | |
CN114821341A (en) | Remote sensing small target detection method based on double attention of FPN and PAN network | |
CN116977747B (en) | Small sample hyperspectral classification method based on multipath multi-scale feature twin network | |
CN115546569B (en) | Attention mechanism-based data classification optimization method and related equipment | |
Yaman et al. | Image processing and machine learning‐based classification method for hyperspectral images | |
CN116563639A (en) | Lightweight multi-scale dense network hyperspectral remote sensing image classification method | |
Rani et al. | Hyperspectral image classification using a new deep learning model based on pseudo-3D block and depth separable 2D–3D convolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||