CN104408478A - Hyperspectral image classification method based on hierarchical sparse discriminant feature learning


Info

Publication number
CN104408478A
Authority
CN
China
Prior art keywords
feature
sample
dictionary
layer
first layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410647211.4A
Other languages
Chinese (zh)
Other versions
CN104408478B (en
Inventor
张向荣
焦李成
梁云龙
马文萍
侯彪
刘若辰
马晶晶
白静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201410647211.4A priority Critical patent/CN104408478B/en
Publication of CN104408478A publication Critical patent/CN104408478A/en
Application granted granted Critical
Publication of CN104408478B publication Critical patent/CN104408478B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/245: Classification techniques relating to the decision surface

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image classification method based on hierarchical sparse discriminant feature learning, which mainly solves the prior-art problem that the feature representation of a hyperspectral-data neighbourhood block cannot be learned well. The method is implemented as follows: input a hyperspectral image data sample set and select a training set and a testing set from it; based on the selected training set and sample set, obtain a first-layer discriminant feature and a second-layer discriminant feature with a hierarchical discriminant feature learning method based on sparse coding; combine the first-layer and second-layer discriminant features into a hierarchical discriminant feature; and, based on the hierarchical discriminant feature, classify with a support vector machine and output the classification result. On the basis of a spatial pyramid sparse coding model, discriminant dictionary learning with class-label supervisory information is added, and two layers of discriminant feature learning are performed on the model; the discriminability of the features is therefore increased, the classification precision rises, and the classification of hyperspectral data becomes more accurate.

Description

A hyperspectral image classification method based on hierarchical sparse discriminant feature learning
Technical field
The invention belongs to the technical field of image processing and relates to machine learning and hyperspectral image processing, specifically to a hyperspectral image classification method based on hierarchical sparse discriminant feature learning. By performing discriminant feature learning on hyperspectral data, the invention can properly characterize the different ground objects of a hyperspectral image, and on this basis enables a computer to autonomously classify and identify them.
Background art
Terrain classification of hyperspectral images is a research hotspot in the current field of hyperspectral image processing; its research is mainly devoted to finding technical methods that make computers intelligently learn and identify different image targets. A hyperspectral image has a high spectral resolution, usually reaching the order of magnitude of 10⁻²λ; at the same time it has many wave bands, with tens or even hundreds of spectral channels that are often contiguous. Terrain classification of hyperspectral images has good application prospects in fields such as geological survey, crop disaster monitoring, atmospheric pollution and military target strikes. The most common hyperspectral terrain classification procedure is: (1) input a hyperspectral image; (2) choose training samples and test samples from it; (3) learn features for the training and test samples with a feature learning method; (4) classify the learned features with a classifier; (5) obtain the classification result. One key issue is how to extract useful information from the large and redundant hyperspectral data and use a suitable feature learning method to characterize the representation of the different ground objects, because the quality of this representation determines the upper bound of the subsequent classification performance. In addition, because hyperspectral data have unfavourable factors such as large data volume, much redundant information and many wave bands, the technical methods used for hyperspectral feature learning are required to be efficient and simple and to have a certain resistance to noise.
In the paper "Linear Spatial Pyramid Matching Using Sparse Coding for Image Classification" (CVPR, 2009), Jianchao Yang et al. apply spatial pyramid max-pooling feature coding, based on sparse coding, to the original hyperspectral image data, and finally classify with a classifier. The concrete steps of the method are: step 1, extract the SIFT features of the samples; step 2, train a dictionary; step 3, code the SIFT features according to the dictionary to obtain sparse coding vectors, and apply the max pooling algorithm to the sparse coding vectors to obtain the final feature of each sample; step 4, classify the final features with a linear support vector machine. Although this method codes features relatively accurately, its remaining weakness is that it depends heavily on the quality of the sparse coding.
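The max-pooling step of this pipeline can be sketched as follows; this is a minimal illustration with a made-up toy code matrix (K = 4 atoms, 3 local codes), not the paper's implementation:

```python
import numpy as np

def max_pool(codes):
    """Max pooling over absolute sparse-code values: collapses a
    K x T matrix of T local sparse codes into one K-dim descriptor."""
    return np.max(np.abs(codes), axis=1)

# toy example: K = 4 dictionary atoms, 3 local sparse codes
codes = np.array([[0.0, -0.9, 0.1],
                  [0.5,  0.0, 0.0],
                  [0.0,  0.2, -0.3],
                  [0.0,  0.0, 0.0]])
pooled = max_pool(codes)   # -> [0.9, 0.5, 0.3, 0.0]
```

Pooling over absolute values keeps the strongest response of each atom regardless of sign, which is what makes the pooled descriptor robust to local variation.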
Summary of the invention
In view of the above deficiencies of the prior art, the present invention proposes a new hierarchical discriminant feature learning method based on sparse coding: class-label information is added when sparse-coding the hyperspectral image data, and structural information is added in the process of hierarchical feature learning, making the terrain classification features more discriminative and thereby further improving the intelligent recognition of the different ground objects in hyperspectral data images.
The technical scheme of the present invention is a hyperspectral image classification method based on hierarchical sparse discriminant feature learning, comprising the following steps:
(1) Input hyperspectral remote sensing data containing C classes of ground objects. Each pixel is a sample represented by its spectral feature vector of dimensionality h; all samples form the sample set Y = {y_i}, i = 1, …, N, with Y ∈ R^(h×N), where y_i is the i-th sample, N is the total number of samples and R denotes the real number field;
(2) From each class randomly select 10% of the samples as the training set Y_train, where N₁ denotes the number of training samples; the remaining 90% of the samples form the test set Y_test, where N₂ denotes the number of test samples;
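The per-class 10% split of step (2) can be sketched as follows; `split_per_class` is a hypothetical helper name, and the seeded generator stands in for the random selection:

```python
import numpy as np

def split_per_class(labels, frac=0.1, rng=None):
    """Randomly pick `frac` of the samples of each class as the
    training set; the remaining samples form the test set."""
    rng = rng or np.random.default_rng(0)
    train_idx = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        n_train = max(1, int(round(frac * idx.size)))
        train_idx.extend(rng.choice(idx, n_train, replace=False))
    train_idx = np.sort(train_idx)
    test_idx = np.setdiff1d(np.arange(labels.size), train_idx)
    return train_idx, test_idx
```

Splitting per class rather than globally keeps rare classes (some Indiana Pines classes have only a few dozen pixels) represented in the training set.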
(3) Based on the training set Y_train and the sample set Y, use the hierarchical discriminant feature learning method based on sparse coding to obtain the first-layer discriminant feature set F¹ = {F¹_i}, i = 1, …, N, and the second-layer discriminant feature set F² = {F²_i}, i = 1, …, N, where F¹_i is the first-layer discriminant feature of the i-th sample of Y and F²_i is its second-layer discriminant feature:
3a) Randomly select K₁ training samples from the training set as the initialization dictionary of the first-layer discriminant dictionary; use the discriminant K-SVD dictionary learning method to obtain the first-layer discriminant dictionary D;
3b) Based on the first-layer discriminant dictionary D, use the orthogonal matching pursuit algorithm to obtain the first-layer sparse coding features of all samples, Z = [z₁, z₂, …, z_N] ∈ R^(K₁×N);
3c) From the first-layer sparse coding features of all samples, use the first-layer discriminant feature learning method to obtain the first-layer discriminant feature set F¹ = {F¹_i} ∈ R^(20K₁×N) and the second-layer input feature set D̂_i ∈ R^(5K₁×4), i = 1, 2, …, N;
3d) Randomly select K₂ features from the second-layer input feature set corresponding to the training set as the initialization dictionary D′₂ of the second-layer discriminant dictionary; combining the corresponding class-label matrix and discrimination matrix, optimize the discriminant dictionary objective function in the same way as the first-layer discriminant dictionary learning method, obtaining the second-layer discriminant dictionary;
3e) Based on the second-layer input feature set of the sample set Y and the second-layer discriminant dictionary, use the orthogonal matching pursuit algorithm to obtain the second-layer sparse coding feature of each sample, i = 1, 2, …, N; apply the max pooling algorithm to the second-layer sparse coding features of all samples to obtain the second-layer discriminant feature set F²;
(4) Merge the first-layer discriminant feature set F¹ and the second-layer discriminant feature set F² to obtain the hierarchical discriminant feature set F of the sample set Y: F = [F¹; F²];
(5) Input the hierarchical discriminant features corresponding to the training set and the test set into a support vector machine to obtain the classification label vector of the test set; this label vector is the classification result of the hyperspectral image.
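Step (4) is a vertical concatenation of the two per-sample feature sets; a minimal numpy sketch, where the dimensions d1 and d2 are placeholders chosen for illustration rather than the 20K₁ and pooled second-layer sizes of the method:

```python
import numpy as np

# hypothetical per-sample sizes of the first- and second-layer features
d1, d2, N = 16, 8, 5
rng = np.random.default_rng(0)
F1 = rng.normal(size=(d1, N))   # first-layer discriminant feature set
F2 = rng.normal(size=(d2, N))   # second-layer discriminant feature set
F = np.vstack([F1, F2])         # hierarchical feature set F = [F1; F2]
```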
The concrete steps of the discriminant K-SVD dictionary learning method in step 3a) above are:
Step 1: based on the training set Y_train, the objective function of the discriminant K-SVD dictionary learning method is:

    argmin_{D,W,A,X} ||Y_train − DX||²₂ + α||Q − AX||²₂ + β||H − WX||²₂   s.t. ∀i, ||x_i||₁ ≤ ε

where the first term is the reconstruction error term, the second term is the discriminant sparse-coding constraint term, and the third term is the classification error term. D denotes the first-layer discriminant dictionary, containing K₁ dictionary atoms of dimensionality d each; W denotes the classification transformation matrix; A denotes a linear transformation matrix; X denotes the sparse coding coefficient matrix; ||·||²₂ denotes the squared l₂ norm; α and β are regularization parameters balancing the class-label discrimination term and the classification error term, with value range 1 to 5; Q denotes the ideal discriminant sparse coding coefficient matrix: Q_ki is 1 if the k-th dictionary atom of D and the i-th sample of the training set Y_train belong to the same class, and 0 otherwise; H denotes the class-label matrix of the training samples: H_ci is 1 if the i-th sample of Y_train belongs to class c (c = 1, 2, …, C), and 0 otherwise; x_i denotes the i-th column vector of the sparse coding coefficient matrix X; ||·||₁ denotes the l₁ norm; ε is set to 10⁻⁶.
Step 2: to solve the objective function of the discriminant K-SVD dictionary learning method, it is rewritten as:

    argmin_{D_new,X} ||Y_new − D_new X||²₂   s.t. ∀i, ||x_i||₁ ≤ ε

where Y_new = (Y_trainᵀ, √α·Qᵀ, √β·Hᵀ)ᵀ, D_new = (Dᵀ, √α·Aᵀ, √β·Wᵀ)ᵀ, and (·)ᵀ denotes matrix transposition. Solving this objective function with the K-SVD dictionary learning method yields the first-layer discriminant dictionary D.
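The rewriting in step 2 can be checked numerically: stacking the data, the ideal codes Q and the labels H (scaled by √α and √β) makes the single reconstruction error of the augmented problem equal the sum of the three weighted terms. A small sketch, with all matrices random placeholders:

```python
import numpy as np

def stack_dksvd(Y, Q, H, alpha, beta):
    """Augmented matrix (Y^T, sqrt(alpha) Q^T, sqrt(beta) H^T)^T."""
    return np.vstack([Y, np.sqrt(alpha) * Q, np.sqrt(beta) * H])

rng = np.random.default_rng(0)
d, K, n, C = 6, 4, 5, 3
Y, Q, H = rng.normal(size=(d, n)), rng.normal(size=(K, n)), rng.normal(size=(C, n))
D, A, W = rng.normal(size=(d, K)), rng.normal(size=(K, K)), rng.normal(size=(C, K))
X = rng.normal(size=(K, n))
alpha, beta = 2.0, 3.0

Y_new = stack_dksvd(Y, Q, H, alpha, beta)
D_new = stack_dksvd(D, A, W, alpha, beta)
lhs = np.linalg.norm(Y_new - D_new @ X) ** 2
rhs = (np.linalg.norm(Y - D @ X) ** 2
       + alpha * np.linalg.norm(Q - A @ X) ** 2
       + beta * np.linalg.norm(H - W @ X) ** 2)
# lhs and rhs agree up to floating-point error
```

This equivalence is what allows plain (unsupervised) K-SVD to solve the supervised objective.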
The concrete steps of the orthogonal matching pursuit algorithm in step 3b) above are:
Step 1: based on the first-layer discriminant dictionary D, the objective function of the orthogonal matching pursuit algorithm is:

    min_{z_i} ||y_i − D z_i||²₂   s.t. ||z_i||₁ ≤ δ,  i = 1, 2, …, N

where y_i denotes the i-th sample of the sample set Y, z_i denotes the sparse coding coefficients of y_i, and δ is set to 10⁻⁶.
Step 2: construct the residual term, r⁽⁰⁾ = y_i, i = 1, 2, …, N; initialize the index set Λ⁽⁰⁾ as a K-dimensional zero vector and the variable J = 1.
Step 3: find the index λ of the column d_j of the dictionary D whose inner product with the residual r⁽ᴶ⁻¹⁾ is largest in absolute value, i.e. λ = argmax_j |⟨r⁽ᴶ⁻¹⁾, d_j⟩|.
Step 4: update the index set, Λ⁽ᴶ⁾(J) = λ; update the set formed by the selected dictionary atom columns, D⁽ᴶ⁾ = D(:, Λ⁽ᴶ⁾(1:J)); obtain the J-th-order approximation z_i by the least squares method and the new residual r⁽ᴶ⁾ = y_i − D⁽ᴶ⁾z_i; set J = J + 1.
Step 5: check whether the iteration ends: if J ≤ K, return to step 3 and continue; if J > K, the coding of the current sample y_i is finished; if some sample has not yet been used as the residual term, return to step 2 for the next sample, otherwise the algorithm terminates.
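Steps 2 to 5 above can be sketched as a short numpy routine; `omp` is a hypothetical helper name, and the stopping rule here is the sparsity level alone (the patent additionally loops over all samples):

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then refit the coefficients of
    all selected atoms by least squares."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    z = np.zeros(D.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:          # no new atom improves the fit
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    z[support] = coef
    return z

# exact recovery on a trivial orthonormal dictionary
z = omp(np.eye(4), np.array([3.0, 0.0, 1.0, 0.0]), sparsity=2)  # -> [3, 0, 1, 0]
```

The least-squares refit over the whole support at each iteration is what distinguishes *orthogonal* MP from plain matching pursuit.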
The concrete steps of the first-layer discriminant feature learning method in step 3c) above are:
Step 1: centred on the sparse coding feature z_i of each sample, i = 1, 2, …, N, take a neighbourhood window of sparse codes of size (2m+1)×(2m+1) to construct the sparse coding block Z_i of every sample, i = 1, 2, …, N; Z_i is a (2m+1)×(2m+1)×K₁ three-dimensional matrix.
Step 2: partition the sparse coding block Z_i of each sample with a sliding window of size (m+1)×(m+1) and stride m, traversing Z_i from top to bottom and from left to right, and extract in turn the sparse coding representation sub-blocks Z_i⁽¹⁾, Z_i⁽²⁾, Z_i⁽³⁾ and Z_i⁽⁴⁾, 4 sub-blocks in total, each of size (m+1)×(m+1)×K₁.
Step 3: apply the spatial pyramid max pooling algorithm to the 4 sub-blocks in turn:

    SM(Z_i⁽ʲ⁾) = [M((Z_i⁽ʲ⁾)₁¹), …, M((Z_i⁽ʲ⁾)₁^V1), …, M((Z_i⁽ʲ⁾)_u^Vu), …, M((Z_i⁽ʲ⁾)_U^VU)],  j = 1, 2, 3, 4

where SM(·) denotes the spatial pyramid max pooling operation, u denotes the spatial pyramid decomposition level, V_u is the total number of blocks at level u of the spatial pyramid, and M(·) denotes the max pooling algorithm, M((Z_i⁽ʲ⁾)_u^Vu) = [max_{t∈Z_i⁽ʲ⁾} |z_1t|, …, max_{t∈Z_i⁽ʲ⁾} |z_K₁t|].
Step 4: stacking by rows, F¹_i = [SM(Z_i⁽¹⁾); SM(Z_i⁽²⁾); SM(Z_i⁽³⁾); SM(Z_i⁽⁴⁾)] gives the first-layer discriminant feature of the i-th sample; stacking by columns, D̂_i = [SM(Z_i⁽¹⁾), SM(Z_i⁽²⁾), SM(Z_i⁽³⁾), SM(Z_i⁽⁴⁾)] gives the second-layer input feature of the i-th sample.
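Steps 1 to 4 above can be sketched for m = 1, where a 3×3 code block splits into four overlapping 2×2 sub-blocks; this simplified version uses a single pooling level per sub-block, whereas the patent pools over a multi-level spatial pyramid to reach 5K₁ dimensions per sub-block:

```python
import numpy as np

def first_layer_features(Z, m=1):
    """Split a (2m+1)x(2m+1)xK1 sparse-code block into four
    (m+1)x(m+1) sub-blocks with stride m, and max-pool each over
    absolute code values. Returns a 4 x K1 matrix whose rows,
    stacked end to end, give a (single-level analogue of the)
    first-layer feature; its columns give the second-layer input."""
    pooled = []
    for r in (0, m):                    # top / bottom sub-block rows
        for c in (0, m):                # left / right sub-block cols
            sub = Z[r:r + m + 1, c:c + m + 1, :]
            pooled.append(np.max(np.abs(sub), axis=(0, 1)))
    return np.stack(pooled)             # shape (4, K1)
```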
Beneficial effects of the present invention: the present invention inputs hyperspectral image data; uses a randomly selected part of the training samples as the initial first-layer discriminant dictionary; obtains the first-layer discriminant dictionary through discriminant dictionary learning; solves the sparse coding representation coefficients of the neighbourhood block of each hyperspectral sample according to the obtained first-layer discriminant dictionary; obtains the initial second-layer discriminant dictionary and the first-layer coding features through pyramid max pooling; obtains the second-layer discriminant dictionary from the initial second-layer dictionary through the discriminant dictionary learning algorithm; solves the sparse coding coefficients of the region blocks corresponding to the second-layer feature coding according to the obtained second-layer discriminant dictionary; obtains the second-layer coding features through pyramid max pooling; combines the first-layer coding features with the second-layer coding features as the finally learned feature; and classifies this feature with a classifier, thereby achieving hyperspectral terrain classification with higher classification precision. Compared with the prior art, the present invention has the following advantages:
First, the present invention uses the discriminant dictionary learning method and considers class-label information in both the first-layer and the second-layer dictionary learning, overcoming the deficiency that traditional K-SVD dictionary learning does not make full use of class-label information; the dictionary learned by the present invention, and the sparse coding coefficients obtained with it, are therefore more discriminative.
Second, the present invention uses a multilayer sparse coding feature learning method, overcoming the low classification precision of the traditional approach of classifying directly on single-layer sparse coding coefficients, so the present invention achieves high classification precision.
Third, the present invention uses a feature learning method that combines the spatial and spectral domains, overcoming the deficiency of per-pixel feature learning algorithms that ignore the surrounding neighbourhood information, so the features learned by the present invention are more robust.
The present invention is described in further detail below with reference to the accompanying drawings.
Accompanying drawing explanation
Fig. 1 is the flowchart of the method of the invention;
Fig. 2 is the Indiana Pines image used in the simulation experiment of the invention.
Detailed description
The invention is described further below with reference to the accompanying drawings.
The concrete steps of the present invention, with reference to Fig. 1, are as follows:
Step 1: input hyperspectral remote sensing data containing C classes of ground objects; each pixel is a sample represented by its spectral feature vector of dimensionality h; all samples form the sample set Y = {y_i}, i = 1, …, N, where y_i is the i-th sample, N is the total number of samples and R denotes the real number field.
Step 2: among these N samples, exclude the background sample points, then randomly select 10% of the samples of each class as the training set Y_train, where N₁ denotes the number of training samples; the remaining 90% of the samples form the test set Y_test, where N₂ denotes the number of test samples.
Step 3: based on the training set and the sample set, use the hierarchical discriminant feature learning method based on sparse coding to obtain the first-layer discriminant feature set F¹ and the second-layer discriminant feature set F², where F¹_i is the first-layer discriminant feature of the i-th sample of the sample set and F²_i is its second-layer discriminant feature:
First step: randomly select a part of the training samples of each class, K₁ in total, as the initialization dictionary of the first-layer discriminant dictionary, and use the discriminant K-SVD dictionary learning method to obtain the first-layer discriminant dictionary D. The objective function of the discriminant K-SVD dictionary learning method is:

    argmin_{D,W,A,X} ||Y_train − DX||²₂ + α||Q − AX||²₂ + β||H − WX||²₂   s.t. ∀i, ||x_i||₁ ≤ ε

where the first term is the reconstruction error term, the second term is the discriminant sparse-coding constraint term, and the third term is the classification error term. D denotes the first-layer discriminant dictionary, containing K₁ dictionary atoms of dimensionality d each; W denotes the classification transformation matrix; A denotes a linear transformation matrix; X denotes the sparse coding coefficient matrix; ||·||²₂ denotes the squared l₂ norm; α and β are regularization parameters balancing the class-label discrimination term and the classification error term, with value range 1 to 5; Q denotes the ideal discriminant sparse coding coefficient matrix: Q_ki is 1 if the k-th dictionary atom of D and the i-th sample of the training set Y_train belong to the same class, and 0 otherwise; H denotes the class-label matrix of the training samples: H_ci is 1 if the i-th sample of Y_train belongs to class c (c = 1, 2, …, C), and 0 otherwise; x_i denotes the i-th column vector of the sparse coding coefficient matrix X; ||·||₁ denotes the l₁ norm; ε is set to 10⁻⁶.
To solve the objective function of the discriminant K-SVD dictionary learning method, it is rewritten as:

    argmin_{D_new,X} ||Y_new − D_new X||²₂   s.t. ∀i, ||x_i||₁ ≤ ε

where Y_new = (Y_trainᵀ, √α·Qᵀ, √β·Hᵀ)ᵀ, D_new = (Dᵀ, √α·Aᵀ, √β·Wᵀ)ᵀ and (·)ᵀ denotes matrix transposition; solving this objective function with the K-SVD dictionary learning method gives the first-layer discriminant dictionary D. Writing out the reconstruction error for the k-th atom:

    ||Y_new − D_new X||²₂ = ||Y_new − Σ_{j=1..L} d_j x_jᵀ||²₂ = ||(Y_new − Σ_{j≠k} d_j x_jᵀ) − d_k x_kᵀ||²₂ = ||E_k − d_k x_kᵀ||²₂

where d_j denotes the j-th column atom of D_new, x_jᵀ denotes the j-th row of X, L denotes the total number of columns of D_new, d_k denotes the k-th column atom of D_new, x_kᵀ denotes the k-th row of X, and E_k denotes the error matrix produced when the k-th atom d_k of D_new is not used in the sparse decomposition.
The K-SVD dictionary learning method is as follows:
1. Restrict the discriminant dictionary objective function to the samples whose coding actually uses atom k: multiply the expression above on the right by Ω_k, obtaining the target decomposition formula

    ||E_k Ω_k − d_k x_kᵀ Ω_k||²₂ = ||E_k^R − d_k x_k^R||²₂

where the restricted error matrix E_k^R denotes the restriction of the error matrix E_k; Ω_k has size P×|ω_k|, where P denotes the number of columns of the training sample set Y_new, ω_k is the index set of the samples whose coding uses atom k, and |ω_k| denotes its size; Ω_k is 1 at the positions (ω_k(j), j), 1 ≤ j ≤ |ω_k|, where ω_k(j) denotes the j-th entry of ω_k, and 0 everywhere else.
2. Apply SVD to the restricted error matrix in the target decomposition formula, E_k^R = UΔVᵀ, where U denotes the left singular matrix, Vᵀ denotes the right singular matrix, and Δ denotes the singular value matrix.
3. Update the k-th atom d_k of the dictionary D_new with the first column of the left singular matrix U.
4. Repeat steps 1 to 3 for all atoms of D_new, obtaining the K updated dictionary atoms d′₁, d′₂, …, d′_K.
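The single-atom K-SVD update of steps 1 to 3 can be sketched as follows; `ksvd_atom_update` is a hypothetical helper name, and the restriction to ω_k is done by column indexing rather than by an explicit Ω_k matrix:

```python
import numpy as np

def ksvd_atom_update(D, X, Y, k):
    """One K-SVD atom update, in place: restrict to the samples
    whose coding uses atom k, form the error matrix without atom k,
    and replace (atom, coefficients) by the rank-1 SVD of it."""
    omega = np.flatnonzero(X[k, :])        # samples that use atom k
    if omega.size == 0:
        return
    # error matrix with atom k's contribution removed
    E = Y - D @ X + np.outer(D[:, k], X[k, :])
    E_R = E[:, omega]                      # restricted error matrix
    U, s, Vt = np.linalg.svd(E_R, full_matrices=False)
    D[:, k] = U[:, 0]                      # new unit-norm atom
    X[k, omega] = s[0] * Vt[0, :]          # new coefficients
```

On rank-1 data a single update already gives an exact fit, which is the property the test below checks.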
Second step: based on the first-layer discriminant dictionary D, use the orthogonal matching pursuit algorithm to solve the following objective function and obtain the first-layer coding features of all samples:

    min_{z_i} ||y_i − D z_i||²₂   s.t. ||z_i||₁ ≤ δ,  i = 1, 2, …, N

where y_i denotes the i-th sample of the sample set Y, z_i denotes the sparse coding coefficients of y_i, and δ is set to 10⁻⁶. The orthogonal matching pursuit algorithm is as follows:
First construct the residual term, r⁽⁰⁾ = y_i; initialize the index set Λ⁽⁰⁾ as a K-dimensional zero vector and the variable J = 1.
Then execute the following steps 1 to 5 in a loop:
1. Find the index λ of the column d_j of the dictionary D whose inner product with the residual r⁽ᴶ⁻¹⁾ is largest in absolute value, i.e. λ = argmax_j |⟨r⁽ᴶ⁻¹⁾, d_j⟩|.
2. Update the index set, Λ⁽ᴶ⁾(J) = λ; update the set formed by the selected dictionary atom columns, D⁽ᴶ⁾ = D(:, Λ⁽ᴶ⁾(1:J)).
3. Obtain the J-th-order approximation z_i by the least squares method.
4. Update the residual, r⁽ᴶ⁾ = y_i − D⁽ᴶ⁾z_i; set J = J + 1.
5. Check whether the iteration ends: if J > K, terminate; otherwise continue from 1.
Third step: from the first-layer sparse coding features of all samples, use the first-layer discriminant feature learning method to obtain the first-layer discriminant feature set F¹ = {F¹_i} ∈ R^(20K₁×N) and the second-layer input feature set D̂_i ∈ R^(5K₁×4), i = 1, 2, …, N. The first-layer discriminant feature learning method is as follows:
1. Centred on the sparse coding feature z_i of each sample, take a neighbourhood window of size (2m+1)×(2m+1), m = 1, 2, …, and construct from the sparse coding features of each sample the sparse coding representation block Z_i, i = 1, 2, …, N, i.e. a three-dimensional matrix of size (2m+1)×(2m+1)×K₁.
2. Partition the sparse coding representation block Z_i of each sample with a sliding window of size (m+1)×(m+1) and stride m, traversing the representation block of each sample from top to bottom and from left to right, and extract in turn the sparse coding representation sub-blocks Z_i⁽¹⁾, Z_i⁽²⁾, Z_i⁽³⁾ and Z_i⁽⁴⁾, 4 sub-blocks in total, each of size (m+1)×(m+1)×K₁.
3. Apply the spatial pyramid max pooling algorithm to the 4 sub-blocks in turn:

    SM(Z_i⁽ʲ⁾) = [M((Z_i⁽ʲ⁾)₁¹), …, M((Z_i⁽ʲ⁾)₁^V1), …, M((Z_i⁽ʲ⁾)_u^Vu), …, M((Z_i⁽ʲ⁾)_U^VU)],  j = 1, 2, 3, 4

where SM(·) denotes the spatial pyramid max pooling operation, u denotes the spatial pyramid decomposition level, V_u is the total number of blocks at level u of the spatial pyramid, and M(·) denotes the max pooling algorithm, M((Z_i⁽ʲ⁾)_u^Vu) = [max_{t∈Z_i⁽ʲ⁾} |z_1t|, …, max_{t∈Z_i⁽ʲ⁾} |z_K₁t|].
4. Stacking by rows, F¹_i = [SM(Z_i⁽¹⁾); SM(Z_i⁽²⁾); SM(Z_i⁽³⁾); SM(Z_i⁽⁴⁾)] gives the first-layer discriminant feature of the i-th sample; stacking by columns, D̂_i = [SM(Z_i⁽¹⁾), SM(Z_i⁽²⁾), SM(Z_i⁽³⁾), SM(Z_i⁽⁴⁾)] gives the second-layer input feature of the i-th sample.
Fourth step: randomly select a part of the second-layer input features obtained from the training set, K₂ in total, as the initialization dictionary D′₂ of the second-layer discriminant dictionary; combining the corresponding class-label matrix and discrimination matrix, optimize the discriminant dictionary objective function in the same way as the first-layer discriminant dictionary construction method, obtaining the second-layer discriminant dictionary.
Fifth step: based on the second-layer input features and the second-layer discriminant dictionary, use the orthogonal matching pursuit algorithm to obtain the second-layer sparse coding feature of each sample, where j = 1, 2, 3, 4 corresponds to the second-layer sparse coding feature obtained from the j-th column of the second-layer input feature of the i-th sample; apply the max pooling algorithm to the second-layer sparse coding features of all samples to obtain the second-layer discriminant feature set F².
Step 4: combine the first-layer discriminant feature set of all samples with the second-layer discriminant feature set to obtain the hierarchical discriminant feature set F:

    F = [F¹; F²]

Step 5: input the hierarchical discriminant features corresponding to the training set and the test set into a support vector machine to obtain the classification label vector of the test set; this label vector is the classification result of the hyperspectral image.
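Step 5 hands the hierarchical features to a support vector machine; as a self-contained stand-in for the library SVM the method relies on, here is a minimal linear SVM trained by stochastic sub-gradient descent on the L2-regularized hinge loss, on a made-up separable toy set (a one-vs-rest wrapper would extend it to the C ground-object classes):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=100):
    """Minimal linear SVM: stochastic sub-gradient descent on the
    L2-regularized hinge loss. X is (N, d); y has labels in {-1, +1}."""
    rng = np.random.default_rng(0)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if y[i] * (X[i] @ w + b) < 1:        # margin violated
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                                # only shrink w
                w -= lr * lam * w
    return w, b

def svm_predict(w, b, X):
    return np.where(X @ w + b >= 0.0, 1, -1)

# toy linearly separable data (hypothetical, for illustration only)
X = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
```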
The effect of the present invention is described further below with reference to Fig. 2.
The simulation of the present invention is carried out on the hyperspectral image Indiana Pines, acquired by NASA's AVIRIS sensor over northwestern Indiana in June 1992. The Indiana Pines image is 145×145 pixels and contains 220 wave bands; after removing 20 water-absorption bands, 200 bands remain. The image contains 16 classes of ground objects in total, as shown in Table 1.
The simulation experiments of the present invention were run in MATLAB 2011a on an AMD A4-3400 APU at 2.69 GHz with 4 GB of RAM under 32-bit Windows 7.
Table 1. The 16 classes in the Indiana Pines image

Class  Name                          Samples  Training samples
1      Alfalfa                         46       4
2      Corn-notill                   1428     142
3      Corn-mintill                   830      83
4      Corn                           237      23
5      Grass-pasture                  483      48
6      Grass-trees                    730      73
7      Grass-pasture-mowed             28       2
8      Hay-windrowed                  478      47
9      Oats                            20       2
10     Soybean-notill                 972      97
11     Soybean-mintill               2455     245
12     Soybean-clean                  593      59
13     Wheat                          205      20
14     Woods                         1265     126
15     Buildings-Grass-Trees-Drives   386      38
16     Stone-Steel-Towers              93       9
2. Simulation content and analysis
The present invention and three existing methods are used to classify the hyperspectral image. The three existing methods are: the support vector machine, SVM; the classification method based on sparse representation, SRC; and the classification method based on spatial pyramid matching with sparse coding, SCSPM. The penalty factor and kernel parameter of the SVM method are determined by 5-fold cross validation; the regularization parameter λ of the SRC method is set to 0.1; the sparsity parameter of the SRC and SCSPM methods is set to 20; the spatial scale parameter of the SCSPM method and of the present invention is set to 7×7. From the 16 classes, 10% of the pixels of each class are randomly taken as training samples and the remaining 90% are used for testing; 5 experiments are run and averaged. The experimental precision of the three existing methods and of this method is shown in the following table:
Table 2. Experimental precision of the three existing methods and of the present invention
Method Nicety of grading
SVM 89.23%
SRC 83.70%
SCSPM 92.34%
Method of the present invention 96.54%
As can be seen from Table 2, the method of the present invention achieves the best classification accuracy. The features learned by the present method, when classified with an SVM, yield higher accuracy than the SVM applied directly to the raw data, which shows that the learned features are better suited to the SVM classifier and indirectly confirms that the feature learning is effective. The features obtained by the present method through two layers of dictionary learning and sparse coding are also more effective than those learned by ScSPM and better suited to the SVM classifier, demonstrating that the present invention has a clear advantage over the existing methods.
In summary, the present invention performs hyperspectral image classification by hierarchical discriminant feature learning based on sparse coding. It makes full use of the sparsity and the spatial-context information of the hyperspectral image and can classify the original hyperspectral image more accurately; the comparison with the three existing image classification methods demonstrates its accuracy and effectiveness. Compared with the prior art, it has the following advantages:
First, the present invention uses a discriminant dictionary learning method that takes the class-label information into account in both the first-layer and the second-layer dictionary learning. This overcomes the shortcoming of traditional K-SVD dictionary learning, which does not make full use of class-label information, so that the learned dictionaries, and the sparse coding coefficients obtained from them, are more discriminative.
Second, the present invention uses multi-layer sparse-coding feature learning, overcoming the lower classification accuracy of traditional methods that classify directly on single-layer sparse coding coefficients, and thereby achieves higher classification accuracy.
Third, the present invention uses a feature learning method that combines the spatial and spectral domains, overcoming the shortcoming of per-pixel feature learning algorithms that ignore the surrounding neighborhood information, so that the learned features are more robust.
The parts not described in detail in this embodiment belong to common knowledge and conventional means in the field and are not described one by one here. The above examples are merely illustrations of the present invention and do not limit its scope of protection; any design identical or similar to the present invention falls within its scope of protection.

Claims (4)

1. A hyperspectral image classification method based on hierarchical sparse discriminant feature learning, characterized in that it comprises the following steps:
(1) Input hyperspectral remote sensing data containing C classes of ground objects. Each pixel is a sample, represented by its spectral feature vector; the feature dimensionality of a sample is h. All samples form the sample set $Y = [y_1, y_2, \dots, y_N] \in \mathbb{R}^{h \times N}$, where $y_i$ is the i-th sample, N is the total number of samples, and $\mathbb{R}$ denotes the real field;
(2) From each class, randomly select 10% of the samples as the training set $Y_{\mathrm{train}} \in \mathbb{R}^{h \times N_1}$, where $N_1$ is the number of training samples; the remaining 90% of the samples form the test set $Y_{\mathrm{test}} \in \mathbb{R}^{h \times N_2}$, where $N_2$ is the number of test samples;
(3) Based on the training set $Y_{\mathrm{train}}$ and the sample set Y, use the hierarchical discriminant feature learning method based on sparse coding to obtain the first-layer discriminant feature set $\hat F = [\hat F_1, \dots, \hat F_N]$ and the second-layer discriminant feature set $\bar F = [\bar F_1, \dots, \bar F_N]$, where $\hat F_i$ is the first-layer discriminant feature and $\bar F_i$ the second-layer discriminant feature corresponding to the i-th sample of the sample set Y:
3a) Randomly select $K_1$ training samples from the training set as the initialization dictionary of the first-layer discriminant dictionary, and use the discriminant K-SVD dictionary learning method to obtain the first-layer discriminant dictionary D;
3b) Based on the first-layer discriminant dictionary D, use the orthogonal matching pursuit algorithm to obtain the first-layer sparse coding features of all samples, $Z = [z_1, z_2, \dots, z_N] \in \mathbb{R}^{K_1 \times N}$;
3c) From the first-layer sparse coding features of all samples, use the first-layer discriminant feature learning method to obtain the first-layer discriminant feature set $\hat F$ and the second-layer input feature set $\hat D_i \in \mathbb{R}^{5K_1 \times 4}$, $i = 1, 2, \dots, N$;
3d) Randomly select $K_2$ features from the second-layer input feature set corresponding to the training set as the initialization dictionary $D'_2$ of the second-layer discriminant dictionary; combining the corresponding class-label matrix and discrimination matrix, optimize the discriminant-dictionary objective function in the same way as the first-layer discriminant dictionary learning method to obtain the second-layer discriminant dictionary $D_2$;
3e) Based on the second-layer input feature set of the sample set Y and the second-layer discriminant dictionary $D_2$, use the orthogonal matching pursuit algorithm to obtain the second-layer sparse coding feature of each sample, $i = 1, 2, \dots, N$; then apply the max-pooling algorithm to the second-layer sparse coding features of all samples to obtain the second-layer discriminant feature set $\bar F$;
(4) Fuse the first-layer discriminant feature set $\hat F$ and the second-layer discriminant feature set $\bar F$ to obtain the hierarchical discriminant feature set F of the sample set Y;
(5) Input the hierarchical discriminant features corresponding to the training set and the test set into a support vector machine to obtain the classification label vector of the test set; this label vector is the classification result of the hyperspectral image.
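Steps (4) and (5), concatenating the two layers of discriminant features and classifying with a support vector machine, can be sketched as below. The feature matrices are random stand-ins (the patent publishes no code) and scikit-learn's SVC is used as the support vector machine; all names and dimensions are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 100  # pixels per class in this toy example
labels = np.repeat([0, 1], n)
# Stand-ins for the first-layer and second-layer discriminant feature sets;
# each class is shifted so the fused features are separable.
F1 = rng.normal(size=(2 * n, 8)) + labels[:, None] * 3.0
F2 = rng.normal(size=(2 * n, 5)) + labels[:, None] * 3.0
F = np.hstack([F1, F2])                       # step (4): fuse the two layers
train = np.r_[0:20, n:n + 20]                 # 20 training pixels per class
test = np.setdiff1d(np.arange(2 * n), train)
clf = SVC(kernel="rbf", C=10, gamma="scale")  # step (5): SVM classifier
clf.fit(F[train], labels[train])
acc = clf.score(F[test], labels[test])
```

The same pattern applies to the 16-class Indiana Pines case, with one row of F per pixel and the 10%/90% split of step (2).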
2. The hyperspectral image classification method based on hierarchical sparse discriminant feature learning according to claim 1, characterized in that the concrete steps of the discriminant K-SVD dictionary learning method in step 3a) are:
Step 1: based on the training set $Y_{\mathrm{train}}$, the objective function of the discriminant K-SVD dictionary learning method is as follows:

$$\arg\min_{D, W, A, X} \; \|Y_{\mathrm{train}} - DX\|_2^2 + \alpha \|Q - AX\|_2^2 + \beta \|H - WX\|_2^2$$

$$\text{s.t. } \forall i, \; \|x_i\|_1 \le \varepsilon$$
where the first term is the reconstruction error term, the second term is the discriminant sparse-coding constraint term, and the third term is the classification error term. D denotes the first-layer discriminant dictionary, containing $K_1$ dictionary atoms, each atom having the same dimensionality h as the samples; W denotes the classification transformation matrix; A denotes the linear transformation matrix; X denotes the sparse coding coefficient matrix; $\|\cdot\|_2^2$ denotes the squared $l_2$ norm; α and β are the regularization parameters balancing the class-label discriminant term and the classification error term, with values in the range 1 to 5; Q denotes the ideal discriminant sparse coding coefficient matrix: $Q_{ki}$ is 1 if the k-th dictionary atom of D and the i-th sample of $Y_{\mathrm{train}}$ belong to the same class, and 0 otherwise; H denotes the class-label matrix of the training samples: $H_{ci}$ is 1 if the i-th sample of $Y_{\mathrm{train}}$ belongs to class c (c = 1, 2, …, C), and 0 otherwise; $x_i$ denotes the i-th column vector of the sparse coding coefficient matrix X; $\|\cdot\|_1$ denotes the $l_1$ norm; ε is defined as $10^{-6}$;
Step 2: to solve the objective function of the discriminant K-SVD dictionary learning method, it is rewritten as:

$$\arg\min_{D_{new}, X} \; \|Y_{new} - D_{new} X\|_2^2$$

$$\text{s.t. } \forall i, \; \|x_i\|_1 \le \varepsilon$$

where $Y_{new} = (Y_{\mathrm{train}}^T, \sqrt{\alpha}\, Q^T, \sqrt{\beta}\, H^T)^T$, $D_{new} = (D^T, \sqrt{\alpha}\, A^T, \sqrt{\beta}\, W^T)^T$, and $(\cdot)^T$ denotes matrix transposition. This objective function is solved with the K-SVD dictionary learning method, giving the first-layer discriminant dictionary D.
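The rewriting in step 2 relies on the identity $\|Y_{new} - D_{new}X\|_2^2 = \|Y_{\mathrm{train}} - DX\|_2^2 + \alpha\|Q - AX\|_2^2 + \beta\|H - WX\|_2^2$, which holds when the stacked rows are scaled by $\sqrt{\alpha}$ and $\sqrt{\beta}$. The NumPy sketch below checks this numerically on random matrices; the function name and dimensions are illustrative:

```python
import numpy as np

def build_augmented_system(Y, D, Q, A, H, W, alpha, beta):
    """Stack the three penalty terms into a single K-SVD problem.
    Scaling rows by sqrt(alpha)/sqrt(beta) makes the squared norms add up."""
    Y_new = np.vstack([Y, np.sqrt(alpha) * Q, np.sqrt(beta) * H])
    D_new = np.vstack([D, np.sqrt(alpha) * A, np.sqrt(beta) * W])
    return Y_new, D_new

rng = np.random.default_rng(1)
h, K, n, C = 6, 5, 7, 3                 # sample dim, atoms, samples, classes
Y = rng.normal(size=(h, n)); D = rng.normal(size=(h, K))
Q = rng.integers(0, 2, size=(K, n)).astype(float); A = rng.normal(size=(K, K))
H = rng.integers(0, 2, size=(C, n)).astype(float); W = rng.normal(size=(C, K))
X = rng.normal(size=(K, n))
Yn, Dn = build_augmented_system(Y, D, Q, A, H, W, alpha=2.0, beta=3.0)
lhs = np.linalg.norm(Yn - Dn @ X) ** 2
rhs = (np.linalg.norm(Y - D @ X) ** 2
       + 2.0 * np.linalg.norm(Q - A @ X) ** 2
       + 3.0 * np.linalg.norm(H - W @ X) ** 2)
print(np.isclose(lhs, rhs))  # True: the two objectives coincide
```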
3. The hyperspectral image classification method based on hierarchical sparse discriminant feature learning according to claim 1, characterized in that the concrete steps of the orthogonal matching pursuit algorithm in step 3b) are:
Step 1: based on the first-layer discriminant dictionary D, the objective optimization function of the orthogonal matching pursuit algorithm is as follows:

$$\min_{z_i} \|y_i - D z_i\|_2^2, \quad \text{s.t. } \|z_i\|_1 \le \delta, \quad i = 1, 2, \dots, N$$

where $y_i$ denotes the i-th sample of the sample set Y, $z_i$ denotes the sparse coding coefficient of $y_i$, and δ is defined as $10^{-6}$;
Step 2: construct the residual term, initialized as $r^{(0)} = y_i$, $i = 1, 2, \dots, N$; the index set $\Lambda^{(0)}$ is a K-dimensional zero vector; initialize the variable J = 1;
Step 3: find the index λ for which the inner product between the residual $r^{(J-1)}$ and the j-th column $d_j$ of the dictionary D is largest in absolute value, i.e. $\lambda = \arg\max_{j = 1, 2, \dots, T_0} |\langle r^{(J-1)}, d_j \rangle|$;
Step 4: update the index set $\Lambda^{(J)}$ by setting $\Lambda^{(J)}(J) = \lambda$; update the set of selected dictionary atom columns, $D^{(J)} = D(:, \Lambda^{(J)}(1{:}J))$; obtain the J-th order approximation coefficients $z_i$ by least squares and the new residual $r^{(J)} = y_i - D^{(J)} z_i$; set J = J + 1;
Step 5: judge whether the iteration ends: if J ≤ K, return to step 3 and continue the iteration for the current sample; if J > K, the coding of the current sample $y_i$ is complete: if some sample $y_i$, $i = 1, 2, \dots, N$, has not yet been processed as a residual term, return to step 2, otherwise the procedure ends.
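The greedy loop of steps 2 to 5 can be sketched for a single signal as follows: pick the atom most correlated with the residual, re-fit all selected coefficients by least squares, and repeat up to the sparsity level. This is a minimal, illustrative NumPy implementation, not the patent's code:

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit for one signal y over dictionary D."""
    residual = y.copy()
    support, coef = [], np.zeros(0)
    for _ in range(sparsity):
        # step 3: atom with the largest |<residual, d_j>|
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # step 4: least-squares re-fit over all selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    z = np.zeros(D.shape[1])
    z[support] = coef
    return z

rng = np.random.default_rng(0)
D_orth, _ = np.linalg.qr(rng.normal(size=(8, 5)))   # orthonormal atoms
y = 2.0 * D_orth[:, 3] + 0.5 * D_orth[:, 0]         # a 2-sparse signal
z = omp(D_orth, y, sparsity=2)
```

For an orthonormal dictionary and an exactly sparse signal, the loop recovers the true support and coefficients.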
4. The hyperspectral image classification method based on hierarchical sparse discriminant feature learning according to claim 1, characterized in that the concrete steps of the first-layer discriminant feature learning method in step 3c) are:
Step 1: take the sparse coding feature $z_i$ of each sample, $i = 1, 2, \dots, N$, as the center, and collect the sparse coding features within a neighborhood window of size (2m+1) × (2m+1) to construct the sparse coding block $Z_i$ of each sample; $Z_i$ is a three-dimensional matrix of size (2m+1) × (2m+1) × $K_1$;
Step 2: partition the sparse coding block $Z_i$ of each sample: using a sliding window of size (m+1) × (m+1) with step size m, traverse $Z_i$ from top to bottom and from left to right, extracting the sparse coding sub-blocks $Z_i^{(1)}, Z_i^{(2)}, Z_i^{(3)}, Z_i^{(4)}$ in turn, 4 sub-blocks in total, each of size (m+1) × (m+1) × $K_1$;
Step 3: apply the spatial pyramid max-pooling algorithm to the 4 sub-blocks in turn:

$$SM(Z_i^{(j)}) = [M((Z_i^{(j)})_1^1), \dots, M((Z_i^{(j)})_1^{V_1}), \dots, M((Z_i^{(j)})_u^{V_u}), \dots, M((Z_i^{(j)})_U^{V_U})], \quad j = 1, 2, 3, 4$$

where $SM(\cdot)$ denotes the spatial pyramid max-pooling operation, U denotes the number of spatial pyramid decomposition levels, $V_u$ is the total number of blocks at level u of the pyramid, and $M(\cdot)$ denotes the max-pooling algorithm, $M((Z_i^{(j)})_u^{V_u}) = [\max_{t \in Z_i^{(j)}} |z_{1t}|, \dots, \max_{t \in Z_i^{(j)}} |z_{K_1 t}|]$;
Step 4: combine the four pooled vectors by rows, $\hat F_i = [SM(Z_i^{(1)}); SM(Z_i^{(2)}); SM(Z_i^{(3)}); SM(Z_i^{(4)})]$, to obtain the first-layer discriminant feature of the i-th sample; combine them by columns, $\hat D_i = [SM(Z_i^{(1)}), SM(Z_i^{(2)}), SM(Z_i^{(3)}), SM(Z_i^{(4)})]$, to obtain the second-layer input feature of the i-th sample.
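The spatial-pyramid max-pooling operator SM(·) of step 3 can be sketched as follows. With 2 pyramid levels the output per sub-block has $5K_1$ entries (one global cell plus four quadrant cells), consistent with the $5K_1 \times 4$ second-layer input dimension stated in claim 1. The function below is an illustrative simplification, not the patent's code:

```python
import numpy as np

def spatial_pyramid_max_pool(block, levels=2):
    """Max-pool a (height, width, K) block of sparse codes over a pyramid:
    level u splits the block into 2^u x 2^u cells; in each cell the
    per-dimension maximum of |code| is taken, and all cells concatenated."""
    h, w, K = block.shape
    feats = []
    for u in range(levels):
        g = 2 ** u                      # cells per side at this level
        hs, ws = h // g, w // g
        for r in range(g):
            for c in range(g):
                cell = block[r * hs:(r + 1) * hs, c * ws:(c + 1) * ws, :]
                feats.append(np.abs(cell).max(axis=(0, 1)))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
block = rng.normal(size=(4, 4, 3))      # a toy (m+1) x (m+1) x K_1 sub-block
pooled = spatial_pyramid_max_pool(block)
print(pooled.shape)  # (15,) = 5 * K with K = 3
```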