CN110533077A - Shape-adaptive convolution deep neural network method for hyperspectral image classification - Google Patents

Shape-adaptive convolution deep neural network method for hyperspectral image classification

Info

Publication number
CN110533077A
Authority
CN
China
Prior art keywords
convolution
spatial
guide map
spectrum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910709042.5A
Other languages
Chinese (zh)
Other versions
CN110533077B (en)
Inventor
肖亮
刘启超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201910709042.5A priority Critical patent/CN110533077B/en
Publication of CN110533077A publication Critical patent/CN110533077A/en
Application granted granted Critical
Publication of CN110533077B publication Critical patent/CN110533077B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a shape-adaptive convolution deep neural network method for hyperspectral image classification. The method comprises: adopting a spatial-structure learning branch; adopting a trainable shape-adaptive convolution kernel based on a guide map; forming spatial-spectral feature extraction units from a spectral-dimension one-dimensional convolutional layer and a spatial-dimension two-dimensional convolutional layer, each unit having two inputs, a feature map and the guide map; stacking multiple spatial-spectral feature extraction units into a deep network, with skip connections established between every two feature extraction units; and using a weighted cross-entropy as the network loss function. By learning the spatial correlation between neighboring pixels in the spatial-spectral data, the present invention can adaptively adjust the receptive-field shape of the convolution operation according to explicitly defined spatial-structure relations between pixels, overcomes the inability of fixed rectangular convolutions to capture anisotropic features, and exhibits excellent classification and generalization performance on hyperspectral images of different resolutions and scene complexities.

Description

Shape-adaptive convolution deep neural network method for hyperspectral image classification
Technical field
The present invention relates to hyperspectral image classification technology, and in particular to a shape-adaptive convolution deep neural network method for hyperspectral image classification.
Background art
Hyperspectral cameras acquire "image-spectrum merged" data cubes rich in material information. With nanometer-level (nm) spectral resolution across the visible/near-infrared, shortwave-infrared, mid-infrared and thermal-infrared ranges, they provide up to hundreds of continuous, narrow spectral-band images, and are widely used in military reconnaissance, environmental monitoring, geological exploration, target detection and other fields. Supervised classification of hyperspectral images (HSI) is one of the most important research topics in this field.
Over the past decade, researchers have proposed many supervised classification methods for HSI. From simple statistics-based models to complex representation-based approaches, HSI classification has become a focused research topic in remote sensing. General-purpose classifiers, such as linear or nonlinear regression (LR or NLR), support vector machines (SVM), extreme learning machines (ELM) and multiple-kernel learning (MKL), only roughly partition the spectral data in some high-dimensional space and do not reveal the discriminative characteristics of the spectra. To explore the structure of hyperspectral data, representation-based methods such as sparse representation (SR), dictionary learning (DL), manifold learning, wavelet transforms, principal component analysis (PCA) and linear discriminant analysis (LDA) uncover discriminative spectral features to some extent. However, noise caused by low image quality (e.g., low resolution, illumination, viewing angle) and coarse labeling leads to pixels of different classes exhibiting identical or similar spectra. To mitigate the influence of such noise, researchers have proposed classification methods that exploit the aggregation of pixels within homogeneous regions to promote the smoothness of the classification map, i.e., joint spatial-spectral classification. Methods based on spatial-spectral feature extraction and methods based on post-processing are the two common families of joint spatial-spectral classification. Spatial-spectral feature extraction methods usually characterize the spatial structure of HSI with handcrafted features such as Gabor features, morphological features and texture features. Post-processing methods, such as Markov random fields (MRF), local classification voting and re-learning, use the prior of local pixel aggregation to correct locally misclassified pixels and thus improve the final classification accuracy.
However, classification methods based on handcrafted features have limitations when handling different types of hyperspectral data. For example, a method whose parameters are tuned for certain datasets may perform poorly on data of other types acquired by different sensors. In other words, most traditional classification methods lack sufficient generalization ability. Fortunately, deep learning methods can learn hierarchical feature representations directly from raw data, which offers another effective solution to the above problem. Researchers have carried out extensive exploration, and several typical deep learning models, such as convolutional neural networks (CNNs), deep belief networks (DBNs) and stacked autoencoders (SAE), have been applied to HSI classification. Although deep learning has powerful feature learning and representation abilities, the traditional deep learning frameworks for HSI classification still have limitations. Specifically, conventional CNNs that perform well on 2D data (e.g., images) have difficulty handling 3D data (e.g., HSIs) well, because the spatial structure information of HSI exists only in local spatial regions rather than over the global spatial domain; the spectrum is the primary information source for discriminating materials, while spatial information plays only an auxiliary role. For these reasons, many deep learning methods take HSI neighborhood blocks as the algorithm input so as to exploit spatial and spectral information simultaneously. However, the standard CNN used for HSI classification has a notable drawback: owing to the fixed geometric structure of the CNN module, the convolution unit samples the input feature map at fixed positions, which introduces interfering information into the pixel-level feature computation and causes misclassification of pixels near boundaries between different materials. An obvious consequence of this defect is that the classification map becomes over-smoothed and loses much scene detail, so the classification of HSIs rich in scene details is poor.
Summary of the invention
The object of the present invention is to provide a shape-adaptive convolution deep neural network method for hyperspectral image classification.
The technical solution for achieving the object of the invention is as follows: a shape-adaptive convolution deep neural network method for hyperspectral image classification, comprising the following steps:
The first step: using a convolutional neural network branch to learn the spatial structure information of the hyperspectral image and storing it in a guide map;
The second step: constructing a shape-adaptive convolution which, together with the guide map, extracts anisotropic spatial-spectral features;
The third step: forming a spatial-spectral feature extraction unit from a spectral-dimension one-dimensional convolutional layer and a spatial-dimension two-dimensional shape-adaptive convolutional layer, which successively performs the spectral one-dimensional convolution and the spatial two-dimensional shape-adaptive convolution; each feature extraction unit has two inputs, a feature map and the guide map;
The fourth step: stacking multiple spatial-spectral feature extraction units into a deep network and establishing skip connections between every two feature extraction units, i.e., the deep network is formed by stacking multiple feature extraction units, and the input of each unit is the concatenation of the outputs of all preceding units;
The fifth step: constructing a weighted cross-entropy loss function.
Compared with the prior art, the present invention has the following notable advantages: (1) the spatial structure information of the spatial-spectral data is extracted by network learning; (2) the shape-adaptive convolution dynamically adjusts the shape of the convolution receptive field according to the true distribution of ground objects, avoiding the misclassification of pixels near boundaries caused by the fixed geometric structure of conventional convolution; (3) the feature extraction unit composed of a spectral one-dimensional convolutional layer and a spatial two-dimensional shape-adaptive convolutional layer effectively extracts anisotropic spatial-spectral features; (4) the network is an end-to-end classification model in which all learning modules are trained and inferred jointly, without additional supervised training stages, and it has excellent generalization and classification performance.
Detailed description of the invention
Fig. 1 is the flow chart of the shape-adaptive convolution deep neural network method of the present invention for hyperspectral image classification.
Fig. 2 is a schematic diagram of the shape-adaptive convolution.
Fig. 3 is the structure diagram of the spatial-spectral feature extraction unit.
Fig. 4 shows the classification result maps of different methods on the synthetic dataset.
Fig. 5 shows the classification result maps of different methods on the Indian Pines dataset.
Specific embodiment
With reference to Fig. 1, a shape-adaptive convolution deep neural network method for hyperspectral image classification comprises the following steps:
The first step: adopt a spatial-structure learning branch, i.e., use a convolutional neural network branch to learn the spatial structure information of the hyperspectral image and store it in a feature map referred to as the guide map. Let X ∈ R^(H×W×B) and G ∈ R^(H×W×N) denote the three-dimensional spatial-spectral data input to the network and the guide map, respectively, where H, W and B are the height, width and number of channels of the spatial-spectral data and N is the number of channels of the guide map. For each spatial coordinate p_0 = (x, y) on the input spatial-spectral data, the guide map is computed as:
G_j(p_0) = f(W_j · X(p_0) + b_j)
where X(p_0) denotes the pixel of the input spatial-spectral data at spatial coordinate p_0, W_j and b_j denote the j-th one-dimensional convolution kernel and its bias, G_j denotes the j-th band of the output guide map, and f(·) denotes the softsign activation function.
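To make the guide-map branch concrete, the following is a minimal NumPy sketch of the per-pixel 1×1 convolution with softsign activation described above; the array sizes and random weights are illustrative assumptions, not the trained parameters of the invention.

```python
import numpy as np

def softsign(x):
    # softsign activation: x / (1 + |x|)
    return x / (1.0 + np.abs(x))

def guide_map(X, W, b):
    """Guide-map branch: a per-pixel (1x1) convolution over the spectral axis.
    X: (H, W, B) spatial-spectral cube, W: (B, N) kernels, b: (N,) biases.
    Returns G: (H, W, N)."""
    return softsign(np.tensordot(X, W, axes=([2], [0])) + b)

# toy example with assumed sizes (H=8, W=8, B=162 bands, N=3 guide channels)
X = np.random.rand(8, 8, 162).astype(np.float32)
W = (0.01 * np.random.randn(162, 3)).astype(np.float32)
b = np.zeros(3, dtype=np.float32)
G = guide_map(X, W, b)
print(G.shape)  # (8, 8, 3)
```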
The second step: adopt a trainable shape-adaptive convolution kernel based on the guide map, i.e., construct a shape-adaptive convolution that differs from the conventional fixed-position sampling pattern and, together with the guide map, extracts anisotropic spatial-spectral features, as shown in Fig. 2. Let R denote the receptive field of the convolution operation; for example, a 3 × 3 receptive field is expressed as R = {(-1,-1), (-1,0), …, (0,1), (1,1)}.
For each spatial coordinate p_0 = (x, y) on the input feature map, ignoring the bias and the activation function, the shape-adaptive convolution computes each output channel as a weighted sum of the input over the receptive field, where S_i denotes the i-th deformable convolution kernel, y_i denotes the i-th channel of the output feature map, and G is the guide map. The deformable convolution kernel is separable into the product of two individual kernels: an isotropic kernel k_iso, which is identical to a standard convolution kernel, and an anisotropic kernel k_anis computed from the guide map G, where σ is a sensitivity parameter, ‖·‖_2 denotes the L2 norm, and exp(·) is the exponential function with base e.
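The formula images of the original publication are not reproduced in this text. A plausible reconstruction from the definitions above (receptive field R, output channel y_i, deformable kernel S_i factored into an isotropic kernel and a guide-map-driven anisotropic kernel with sensitivity σ) is given below in LaTeX notation; the exact placement of σ and of the squared norm is an assumption.

```latex
y_i(p_0) = \sum_{p_n \in \mathcal{R}} S_i(p_0, p_n)\, x(p_0 + p_n), \qquad
S_i(p_0, p_n) = k_i^{\mathrm{iso}}(p_n)\, k^{\mathrm{anis}}(p_0, p_n),
\qquad
k^{\mathrm{anis}}(p_0, p_n) = \exp\!\left( -\sigma \left\lVert G(p_0 + p_n) - G(p_0) \right\rVert_2^2 \right)
```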
The third step: form a spatial-spectral feature extraction unit from a spectral-dimension one-dimensional convolutional layer and a spatial-dimension two-dimensional shape-adaptive convolutional layer, which successively performs the spectral one-dimensional convolution and the spatial two-dimensional shape-adaptive convolution, as shown in Fig. 3. Each feature extraction unit has two inputs, a feature map and the guide map. Let I_l denote the input of the l-th hidden unit and O_l its output. Batch normalization is performed first, standardizing the input using its mean and variance, where E(·) and Var(·) denote the mean and variance functions. The spectral-dimension one-dimensional convolution is then performed, where k_{l|j} and b_{l|j} denote the j-th one-dimensional convolution kernel and bias in the l-th feature extraction unit, T_{l|j} denotes the j-th channel of the resulting feature map, and f(·) denotes the softsign activation function. Finally, the spatial-dimension two-dimensional shape-adaptive convolution is performed and the spatial-spectral feature map O_l is output, where S_{l|j} and its corresponding bias denote the j-th two-dimensional deformable convolution kernel and bias in the l-th feature extraction unit, p_n enumerates the coordinates in the receptive field R, G is the guide map, O_{l|j} denotes the j-th channel of the output spatial-spectral feature map, and f(·) denotes the softsign activation function.
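As an illustration of the unit just described, the following is a minimal NumPy sketch of one spatial-spectral feature extraction unit (batch normalization, spectral one-dimensional convolution, then the guide-map-driven spatial shape-adaptive convolution). It follows the reconstruction assumed above, omits the learnable scale/shift of batch normalization, and all names and shapes are illustrative rather than taken from the original implementation.

```python
import numpy as np

def softsign(x):
    return x / (1.0 + np.abs(x))

def batch_norm(x, eps=1e-5):
    # standardize each channel over the spatial dimensions (no learned scale/shift)
    mean = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def spectral_conv1d(x, K, b):
    # spectral-dimension 1D (1x1) convolution: per-pixel projection of the channel vector
    # x: (H, W, C_in), K: (C_in, C_mid), b: (C_mid,)
    return softsign(np.tensordot(x, K, axes=([2], [0])) + b)

def shape_adaptive_conv2d(x, G, S, b, sigma=1.0):
    """Spatial shape-adaptive convolution guided by G (assumed rule: the isotropic
    kernel S is reweighted per position by exp(-sigma * ||G(p0+pn) - G(p0)||^2)).
    x: (H, W, C), G: (H, W, N), S: (k, k, C, C_out), b: (C_out,)."""
    H, W, C = x.shape
    k, _, _, C_out = S.shape
    r = k // 2
    xp = np.pad(x, ((r, r), (r, r), (0, 0)))                 # zero-pad the feature map
    Gp = np.pad(G, ((r, r), (r, r), (0, 0)), mode='edge')    # edge-pad the guide map
    y = np.empty((H, W, C_out), dtype=x.dtype)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]                   # (k, k, C)
            gdiff = Gp[i:i + k, j:j + k, :] - Gp[i + r, j + r, :]
            w = np.exp(-sigma * (gdiff ** 2).sum(axis=2))     # (k, k) anisotropic weights
            y[i, j] = np.einsum('uv,uvc,uvco->o', w, patch, S) + b
    return softsign(y)

def feature_extraction_unit(x, G, K1, b1, S, b2):
    # BN -> spectral 1D convolution -> spatial shape-adaptive convolution
    t = spectral_conv1d(batch_norm(x), K1, b1)
    return shape_adaptive_conv2d(t, G, S, b2)
```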
The fourth step: the deep network is stacked from multiple spatial-spectral feature extraction units, and skip connections are established between every two spatial-spectral feature extraction units, i.e., the deep network is formed by stacking multiple feature extraction units, and the input of each unit is the concatenation of the outputs of all preceding units. Let I_l denote the input of the l-th hidden unit and O_l its output; then I_l is computed as:
I_l = [O_1, O_2, …, O_{l-1}]
where […] denotes the concatenation of multiple feature maps along the spectral dimension.
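A brief sketch of the dense stacking rule I_l = [O_1, O_2, …, O_{l-1}] described above, again in NumPy; `units` is a hypothetical list of callables such as the feature extraction unit sketched earlier.

```python
import numpy as np

def dense_forward(X, G, units):
    """units: list of callables, each mapping (input feature map, guide map) -> output feature map."""
    outputs = []
    for unit in units:
        # the first unit sees the raw cube; later units see all previous outputs spliced
        inp = X if not outputs else np.concatenate(outputs, axis=-1)
        outputs.append(unit(inp, G))
    # the classification layer receives every unit's output spliced along the spectral dimension
    return np.concatenate(outputs, axis=-1)
```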
The fifth step: the network loss function is a weighted cross-entropy, i.e., a weighted cross-entropy loss function is constructed to alleviate the class-imbalance problem. Let the network input be X ∈ R^(H×W×B); the pixels of the spatial-spectral data can be divided into C different classes, and the network output is Y ∈ R^(H×W×C), where H, W, B and C are the height, width and number of channels of the three-dimensional spatial-spectral data and the number of classes. If the network is stacked from L (L ≥ 1) hidden units and the output of the l-th (1 ≤ l ≤ L) hidden unit is O_l, then the feature map passed from the hidden layers to the classification layer is expressed as:
I = [O_1, O_2, …, O_L]
The transformation from the spatial-spectral feature map to the per-pixel class-probability data is realized by a one-dimensional convolution over I, where […] denotes the concatenation of multiple feature maps along the spectral dimension, p_0 = (x, y) denotes the spatial coordinate of a pixel in the spatial-spectral data, k_j and b_j denote the j-th one-dimensional convolution kernel and bias, and Y_j(p_0) denotes the probability that the pixel at position p_0 of the hyperspectral image belongs to the j-th class. Let Ω denote the set of spatial coordinates of all training samples in the hyperspectral image, L(p_t) the vectorized label of sample X(p_t), and N_c (1 ≤ c ≤ C) the number of training samples of the c-th class; the weighted cross-entropy loss is then defined accordingly, where p_t enumerates all coordinates in Ω, L_c(p_t) denotes the c-th value of the vectorized label L(p_t), and Y_c denotes the c-th channel of the probability map Y.
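The classification-layer and loss formulas of the original publication are likewise not reproduced here. Under the definitions above, one consistent reconstruction is a softmax-normalized 1×1 convolution followed by a class-weighted cross-entropy with a weight w_c that decreases with the class sample count N_c; both the use of softmax and the exact form of w_c are assumptions.

```latex
Y_j(p_0) = \operatorname{softmax}_j\!\bigl(k_j \cdot I(p_0) + b_j\bigr),
\qquad
\mathcal{L} = -\frac{1}{|\Omega|} \sum_{p_t \in \Omega} \sum_{c=1}^{C} w_c \, L_c(p_t)\, \log Y_c(p_t),
\qquad
w_c \propto \frac{1}{N_c}
```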
The present invention has the ability to adaptively adjust the shape of the convolution receptive field and to preserve scene details in the classification, and is applicable to the supervised classification of hyperspectral images of different resolutions and scene complexities, with excellent generalization and classification performance.
By learning the spatial correlation between neighboring pixels in the spatial-spectral data, the network can adaptively adjust the receptive-field shape of the convolution operation according to the explicitly defined spatial-structure relations between pixels, overcoming the inability of fixed rectangular convolutions to capture anisotropic features, and it exhibits excellent classification and generalization performance on hyperspectral images of different resolutions and scene complexities.
The effect of the present invention is further illustrated by the following simulation experiments.
Embodiment
A hyperspectral image is a typical three-dimensional spatial-spectral data cube. The simulation experiments use one synthetic hyperspectral dataset (synthetic dataset) and one real hyperspectral dataset (Indian Pines). The synthetic dataset contains 162 spectral bands covering 0.4–2.5 μm, with an image size of 200 × 200, 5 different ground-object classes and 40,000 labeled samples in total. The Indian Pines dataset is a hyperspectral remote sensing image of the Indian Pines test site in Indiana, USA, acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The image contains 220 bands in total, with a spatial resolution of 20 m and an image size of 145 × 145. After removing 20 water-vapor-absorption and low-SNR bands (bands 104–108, 150–163 and 220), the remaining 200 bands are used as the research object. The area contains 16 known ground-object classes with 10,366 samples in total. For the synthetic dataset, 1% of the samples of each class are randomly selected as the training set, another 1% as the validation set, and the remaining 98% as the test set. For the Indian Pines dataset, 10% of each class is randomly taken as the training set, 1% as the validation set, and the remaining samples as the test set. Each of the two groups of experiments is repeated 10 times and the average is taken as the final result, with OA (Overall Accuracy), AA (Average Accuracy) and the Kappa coefficient as evaluation metrics. Neither dataset undergoes any preprocessing. The comparison methods include: the 2D convolutional neural network (2D-CNN) method, the dual-channel convolutional neural network (DC-CNN) method, the 3D convolutional neural network (3D-CNN) method, the multi-channel convolutional neural network (MC-CNN) method, the deep spatial-spectral residual network (SSRN) method, and the fast dense spatial-spectral convolution deep network (FDSSC) method.
In the experiments, the network modules include one 1 × 1 convolutional layer (called the guide layer) that generates the guide map and 5 spatial-spectral feature extraction units, where: the number of output channels of the guide layer is set to 3; the number of output channels of the 1st feature extraction unit is set to 128, and the number of output channels of the 2nd–5th feature extraction units is set to 32; in all feature extraction units, the size of the deformable convolution kernel is set to 5 × 5 and the initial value of the sensitivity parameter σ is set to 1. The network is optimized with the Adam optimizer, where the learning rate of σ is 0.01 and the learning rate of the remaining parameters is 0.001; the first-moment exponential decay rate β1 is set to 0.9, the second-moment exponential decay rate β2 is set to 0.999, ε is set to 1e-8, and the number of iterations is set to 500. The experimental environment is: CPU: i7-8700K, GPU: GTX-1080Ti, memory: 32 GB, TensorFlow-1.12.
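For reference, the reported experimental settings can be summarized as the following configuration sketch (the dictionary keys are illustrative names, not identifiers from the original implementation):

```python
# Settings reported for the simulation experiments (key names are illustrative)
config = {
    "guide_layer_channels": 3,              # output channels of the 1x1 guide layer
    "num_units": 5,                         # spatial-spectral feature extraction units
    "unit_channels": [128, 32, 32, 32, 32],
    "deformable_kernel_size": (5, 5),
    "sigma_init": 1.0,                      # initial value of the sensitivity parameter
    "optimizer": "Adam",
    "lr_sigma": 0.01,
    "lr_other": 0.001,
    "beta1": 0.9,
    "beta2": 0.999,
    "epsilon": 1e-8,
    "iterations": 500,
}
```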
Table 1 and Table 2 report the classification accuracies obtained in the simulation experiments of the method of the present invention on the synthetic dataset and the Indian Pines dataset, respectively.
Table 1 Classification results of different methods on the synthetic dataset
Table 2 Classification results of different methods on the Indian Pines dataset
The experimental results show that the proposed method is highly effective on the synthetic dataset, with performance clearly higher than advanced methods including SSRN and FDSSC. Owing to the inherent shortcoming of 2D convolution, 2D-CNN, DC-CNN, SSRN and FDSSC show an over-smoothing phenomenon on this dataset, whereas the proposed method effectively preserves the detail information of the original scene and obtains better classification results, demonstrating its validity. For the synthetic dataset, the classification maps obtained by the different methods are shown in Fig. 4. For the Indian Pines dataset, the proposed method still achieves the best classification result among all compared methods. Since the Indian Pines dataset contains considerable noise and the training set likewise contains noise, the proposed method automatically adjusts the degree to which scene details are preserved according to the training set, thereby reaching the optimal classification accuracy. For the Indian Pines dataset, the classification maps obtained by the different methods are shown in Fig. 5. These results show that the proposed method can effectively learn the structural information of spatial-spectral data and adjust the degree of scene-detail preservation according to the training samples to achieve better classification performance.

Claims (6)

1. A shape-adaptive convolution deep neural network method for hyperspectral image classification, characterized by comprising the following steps:
The first step: using a convolutional neural network branch to learn the spatial structure information of the hyperspectral image and storing it in a guide map;
The second step: constructing a shape-adaptive convolution which, together with the guide map, extracts anisotropic spatial-spectral features;
The third step: forming a spatial-spectral feature extraction unit from a spectral-dimension one-dimensional convolutional layer and a spatial-dimension two-dimensional shape-adaptive convolutional layer, which successively performs the spectral one-dimensional convolution and the spatial two-dimensional shape-adaptive convolution; each feature extraction unit has two inputs, a feature map and the guide map;
The fourth step: stacking multiple spatial-spectral feature extraction units into a deep network and establishing skip connections between every two feature extraction units, i.e., the deep network is formed by stacking multiple feature extraction units, and the input of each unit is the concatenation of the outputs of all preceding units;
The fifth step: constructing a weighted cross-entropy loss function.
2. The shape-adaptive convolution deep neural network method for hyperspectral image classification according to claim 1, characterized in that the first step is specifically:
Adopting a spatial-structure learning branch, i.e., using a convolutional neural network branch to learn the spatial structure information of the hyperspectral image and storing it in a feature map referred to as the guide map;
Letting X ∈ R^(H×W×B) and G ∈ R^(H×W×N) denote the three-dimensional spatial-spectral data input to the network and the guide map, respectively, where H, W and B are the height, width and number of channels of the spatial-spectral data and N is the number of channels of the guide map; for each spatial coordinate p_0 = (x, y) on the input spatial-spectral data, the guide map is computed as:
G_j(p_0) = f(W_j · X(p_0) + b_j)
where X(p_0) denotes the pixel of the input spatial-spectral data at spatial coordinate p_0, W_j and b_j denote the j-th one-dimensional convolution kernel and its bias, G_j denotes the j-th band of the output guide map, and f(·) denotes the softsign activation function.
3. The shape-adaptive convolution deep neural network method for hyperspectral image classification according to claim 1, characterized in that the second step is specifically:
Adopting a trainable shape-adaptive convolution kernel based on the guide map, i.e., constructing a shape-adaptive convolution that differs from the conventional fixed-position sampling pattern and, together with the guide map, extracting anisotropic spatial-spectral features;
Letting R denote the receptive field of the convolution operation; for each spatial coordinate p_0 = (x, y) on the input feature map, ignoring the bias and the activation function, the shape-adaptive convolution computes each output channel as a weighted sum of the input over the receptive field, where S_i denotes the i-th deformable convolution kernel, y_i denotes the i-th channel of the output feature map, and G is the guide map;
The deformable convolution kernel is separable into the product of two individual kernels: an isotropic kernel k_iso, which is identical to a standard convolution kernel, and an anisotropic kernel k_anis computed from the guide map G, where σ is a sensitivity parameter, ‖·‖_2 denotes the L2 norm, and exp(·) is the exponential function with base e.
4. The shape-adaptive convolution deep neural network method for hyperspectral image classification according to claim 1, characterized in that in the third step a spatial-spectral feature extraction unit is formed from a spectral-dimension one-dimensional convolutional layer and a spatial-dimension two-dimensional shape-adaptive convolutional layer, which successively performs the spectral one-dimensional convolution and the spatial two-dimensional shape-adaptive convolution, each feature extraction unit having two inputs, a feature map and the guide map, specifically:
Letting I_l denote the input of the l-th hidden unit and O_l its output, batch normalization is performed first, standardizing the input using its mean and variance, where E(·) and Var(·) denote the mean and variance functions;
The spectral-dimension one-dimensional convolution is then performed, where k_{l|j} and b_{l|j} denote the j-th one-dimensional convolution kernel and bias in the l-th feature extraction unit, T_{l|j} denotes the j-th channel of the resulting feature map, and f(·) denotes the softsign activation function;
Finally, the spatial-dimension two-dimensional shape-adaptive convolution is performed and the spatial-spectral feature map O_l is output, where S_{l|j} and its corresponding bias denote the j-th two-dimensional deformable convolution kernel and bias in the l-th feature extraction unit, p_n enumerates the coordinates in the receptive field R, G is the guide map, O_{l|j} denotes the j-th channel of the output spatial-spectral feature map, and f(·) denotes the softsign activation function.
5. The shape-adaptive convolution deep neural network method for hyperspectral image classification according to claim 1, characterized in that in the fourth step the deep network is stacked from multiple spatial-spectral feature extraction units and skip connections are established between every two spatial-spectral feature extraction units, i.e., the deep network is formed by stacking multiple feature extraction units, and the input of each unit is the concatenation of the outputs of all preceding units, specifically:
Letting I_l denote the input of the l-th hidden unit and O_l its output, I_l is computed as:
I_l = [O_1, O_2, …, O_{l-1}]
where […] denotes the concatenation of multiple feature maps along the spectral dimension.
6. The shape-adaptive convolution deep neural network method for hyperspectral image classification according to claim 1, characterized in that in the fifth step the network loss function is a weighted cross-entropy, i.e., a weighted cross-entropy loss function is constructed to alleviate the class-imbalance problem, specifically:
Letting the network input be X ∈ R^(H×W×B), the pixels of the spatial-spectral data can be divided into C different classes and the network output is Y ∈ R^(H×W×C), where H, W, B and C are the height, width and number of channels of the three-dimensional spatial-spectral data and the number of classes; if the network is stacked from L hidden units and the output of the l-th hidden unit is O_l, 1 ≤ l ≤ L, then the feature map passed from the hidden layers to the classification layer is expressed as:
I = [O_1, O_2, …, O_L]
The transformation from the spatial-spectral feature map to the per-pixel class-probability data is realized by a one-dimensional convolution over I, where […] denotes the concatenation of multiple feature maps along the spectral dimension, p_0 = (x, y) denotes the spatial coordinate of a pixel in the spatial-spectral data, k_j and b_j denote the j-th one-dimensional convolution kernel and bias, and Y_j(p_0) denotes the probability that the pixel at position p_0 of the hyperspectral image belongs to the j-th class; letting Ω denote the set of spatial coordinates of all training samples in the hyperspectral image, L(p_t) the vectorized label of sample X(p_t), and N_c the number of training samples of the c-th class, 1 ≤ c ≤ C, the weighted cross-entropy loss is defined accordingly, where p_t enumerates all coordinates in Ω, L_c(p_t) denotes the c-th value of the vectorized label L(p_t), and Y_c denotes the c-th channel of the probability map Y.
CN201910709042.5A 2019-08-01 2019-08-01 Shape adaptive convolution depth neural network method for hyperspectral image classification Active CN110533077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910709042.5A CN110533077B (en) 2019-08-01 2019-08-01 Shape adaptive convolution depth neural network method for hyperspectral image classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910709042.5A CN110533077B (en) 2019-08-01 2019-08-01 Shape adaptive convolution depth neural network method for hyperspectral image classification

Publications (2)

Publication Number Publication Date
CN110533077A true CN110533077A (en) 2019-12-03
CN110533077B CN110533077B (en) 2022-09-27

Family

ID=68662064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910709042.5A Active CN110533077B (en) 2019-08-01 2019-08-01 Shape adaptive convolution depth neural network method for hyperspectral image classification

Country Status (1)

Country Link
CN (1) CN110533077B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144423A (en) * 2019-12-26 2020-05-12 哈尔滨工业大学 Hyperspectral remote sensing data multi-scale spectral feature extraction method based on one-dimensional group convolution neural network
CN111612127A (en) * 2020-04-29 2020-09-01 南京理工大学 Multi-direction information propagation convolution neural network construction method for hyperspectral image classification
CN111667019A (en) * 2020-06-23 2020-09-15 哈尔滨工业大学 Hyperspectral image classification method based on deformable separation convolution
CN111797941A (en) * 2020-07-20 2020-10-20 中国科学院长春光学精密机械与物理研究所 Image classification method and system carrying spectral information and spatial information
CN112990315A (en) * 2021-03-17 2021-06-18 北京大学 3D shape image classification method of equal-variation 3D convolution network based on partial differential operator
CN114186641A (en) * 2021-12-16 2022-03-15 长安大学 Landslide susceptibility evaluation method based on deep learning
CN114638762A (en) * 2022-03-24 2022-06-17 华南理工大学 Modularized hyperspectral image scene self-adaptive panchromatic sharpening method
CN116612356A (en) * 2023-06-02 2023-08-18 北京航空航天大学 Hyperspectral anomaly detection method based on deep learning network
CN116704241A (en) * 2023-05-22 2023-09-05 齐鲁工业大学(山东省科学院) Full-channel 3D convolutional neural network hyperspectral remote sensing image classification method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845381A (en) * 2017-01-16 2017-06-13 Northwestern Polytechnical University Spatial-spectral joint hyperspectral image classification method based on a dual-channel convolutional neural network
CN109376753A (en) * 2018-08-31 2019-02-22 Nanjing University of Science and Technology Densely connected three-dimensional spatial-spectral separable convolution deep network and construction method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845381A (en) * 2017-01-16 2017-06-13 Northwestern Polytechnical University Spatial-spectral joint hyperspectral image classification method based on a dual-channel convolutional neural network
CN109376753A (en) * 2018-08-31 2019-02-22 Nanjing University of Science and Technology Densely connected three-dimensional spatial-spectral separable convolution deep network and construction method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144423A (en) * 2019-12-26 2020-05-12 哈尔滨工业大学 Hyperspectral remote sensing data multi-scale spectral feature extraction method based on one-dimensional group convolution neural network
CN111144423B (en) * 2019-12-26 2023-05-05 哈尔滨工业大学 Hyperspectral remote sensing data multi-scale spectral feature extraction method based on one-dimensional group convolutional neural network
CN111612127B (en) * 2020-04-29 2022-09-06 南京理工大学 Multi-direction information propagation convolution neural network construction method for hyperspectral image classification
CN111612127A (en) * 2020-04-29 2020-09-01 南京理工大学 Multi-direction information propagation convolution neural network construction method for hyperspectral image classification
CN111667019A (en) * 2020-06-23 2020-09-15 哈尔滨工业大学 Hyperspectral image classification method based on deformable separation convolution
CN111797941A (en) * 2020-07-20 2020-10-20 中国科学院长春光学精密机械与物理研究所 Image classification method and system carrying spectral information and spatial information
CN112990315A (en) * 2021-03-17 2021-06-18 北京大学 3D shape image classification method of equal-variation 3D convolution network based on partial differential operator
CN112990315B (en) * 2021-03-17 2023-10-20 北京大学 3D shape image classification method of constant-variation 3D convolution network based on partial differential operator
CN114186641B (en) * 2021-12-16 2022-08-09 长安大学 Landslide susceptibility evaluation method based on deep learning
CN114186641A (en) * 2021-12-16 2022-03-15 长安大学 Landslide susceptibility evaluation method based on deep learning
CN114638762A (en) * 2022-03-24 2022-06-17 华南理工大学 Modularized hyperspectral image scene self-adaptive panchromatic sharpening method
CN114638762B (en) * 2022-03-24 2024-05-24 华南理工大学 Modularized hyperspectral image scene self-adaptive panchromatic sharpening method
CN116704241A (en) * 2023-05-22 2023-09-05 齐鲁工业大学(山东省科学院) Full-channel 3D convolutional neural network hyperspectral remote sensing image classification method
CN116612356A (en) * 2023-06-02 2023-08-18 北京航空航天大学 Hyperspectral anomaly detection method based on deep learning network
CN116612356B (en) * 2023-06-02 2023-11-03 北京航空航天大学 Hyperspectral anomaly detection method based on deep learning network

Also Published As

Publication number Publication date
CN110533077B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN110533077A (en) Form adaptive convolution deep neural network method for classification hyperspectral imagery
CN108491849B (en) Hyperspectral image classification method based on three-dimensional dense connection convolution neural network
Jin et al. A survey of infrared and visual image fusion methods
CN107316013B (en) Hyperspectral image classification method based on NSCT (non-subsampled Contourlet transform) and DCNN (data-to-neural network)
CN103971123B (en) Hyperspectral image classification method based on linear regression Fisher discrimination dictionary learning (LRFDDL)
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network
CN105631480B (en) The Hyperspectral data classification method folded based on multilayer convolutional network and data recombination
Bergado et al. A deep learning approach to the classification of sub-decimetre resolution aerial images
CN106845418A (en) A kind of hyperspectral image classification method based on deep learning
CN109766858A (en) Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering
CN110084159A (en) Hyperspectral image classification method based on the multistage empty spectrum information CNN of joint
CN109389080A (en) Hyperspectral image classification method based on semi-supervised WGAN-GP
CN104299232B (en) SAR image segmentation method based on self-adaptive window directionlet domain and improved FCM
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN110490849A (en) Surface Defects in Steel Plate classification method and device based on depth convolutional neural networks
CN111680579B (en) Remote sensing image classification method for self-adaptive weight multi-view measurement learning
CN108197650A (en) The high spectrum image extreme learning machine clustering method that local similarity is kept
CN104732244A (en) Wavelet transform, multi-strategy PSO (particle swarm optimization) and SVM (support vector machine) integrated based remote sensing image classification method
Xu et al. Multiscale and cross-level attention learning for hyperspectral image classification
CN106529484A (en) Combined spectrum and laser radar data classification method based on class-fixed multinucleated learning
CN108229551A (en) A kind of Classification of hyperspectral remote sensing image method based on compact dictionary rarefaction representation
CN107194423A (en) The hyperspectral image classification method of the integrated learning machine that transfinites of feature based random sampling
CN104809471B (en) A kind of high spectrum image residual error integrated classification method based on spatial spectral information
CN102063627A (en) Method for recognizing natural images and computer generated images based on multi-wavelet transform
CN104217430A (en) Image significance detection method based on L1 regularization

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant