CN100456319C - High spectrum image repeated selection weighing classification method - Google Patents

High spectrum image repeated selection weighing classification method

Info

Publication number
CN100456319C
CN100456319C CNB2007101443011A CN200710144301A
Authority
CN
China
Prior art keywords
weighting
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2007101443011A
Other languages
Chinese (zh)
Other versions
CN101127086A (en)
Inventor
Wang Liguo (王立国)
Zhao Chunhui (赵春晖)
Qiao Yulong (乔玉龙)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CNB2007101443011A priority Critical patent/CN100456319C/en
Publication of CN101127086A publication Critical patent/CN101127086A/en
Application granted granted Critical
Publication of CN100456319C publication Critical patent/CN100456319C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a selectable (multiple) weighting classification method for hyperspectral images, comprising at least one of three processes: sample weighting, feature weighting, and class weighting. Sample weighting is completed by nonlinearly mapping each sample's distance from its class center into a corresponding weight, based on the relationship between a sample's degree of abnormality and that distance. Feature weighting is completed by extending the weighting role played by the within-class scatter matrix in the linear spectral unmixing problem to the least squares SVM classification problem. Class weighting is completed by exploiting the special meaning of the diagonal elements of the identity matrix in the least squares SVM linear system, setting them to different values that reflect the importance of each class.

Description

Hyperspectral image selectable (multiple) weighting classification method
(I) technical field
The invention relates to a hyperspectral image classification method, and in particular to a selectable (multiple) weighting classification method for hyperspectral images based on the least squares support vector machine (least squares SVM); it belongs to the field of pattern recognition.
(II) background of the invention
With the development of remote sensing technology, hyperspectral images (HSI) are used ever more widely. The basic pixels of a hyperspectral image are high-dimensional data vectors, and correctly classifying these vectors is the most basic and important task in hyperspectral data analysis. Among the many hyperspectral image classification methods, the support vector machine (SVM) is widely favored and used for its excellent classification performance. SVMs have developed into a number of structural variants; among them, the least squares SVM is popular, mainly because its cost function is an optimization problem with only equality constraints whose solution reduces to a linear system. Although the least squares SVM performs well in hyperspectral image classification, further improving its classification performance remains worthwhile research. During training, the generalization performance of the least squares SVM is sensitive to outliers and noise-corrupted pixels (collectively, abnormal pixels), which inevitably and widely exist in hyperspectral data and degrade model accuracy. Least squares modeling depends too heavily on the training samples and is sensitive to abnormal pixels: introducing even a small number of them can completely destroy the generalization performance of the model.
In 2002, J.A.K. Suykens proposed a weighting method for the least squares SVM that effectively suppresses severely noise-corrupted pixels and outliers in hyperspectral images, obtaining better robustness and generalization ability. This weighting method, however, requires a preliminary training pass. The computation needed for one training pass is generally large, and the method becomes very time-consuming when there are many training samples; for this reason, it has not been widely adopted.
The above weighting method, like other existing SVM-based weighting methods for hyperspectral image classification, operates on the training samples; few publications consider the following two facts. First, different features (also called bands or spectral bands) of a hyperspectral image contribute differently to class separability, i.e. they play different roles in classification, and so should not be treated equally in classifier design. Second, in practical applications remote sensing data contain numerous classes, and different classes often carry different meaning for hyperspectral data analysis, or attract different degrees of interest from researchers, so the classes also need to be considered in classifier design. Traditional feature selection can extract the features that most affect class separability, but it clearly lacks flexibility and impairs the analysis effect.
Disclosure of the invention
The invention aims to provide a selectable weighting classification method for hyperspectral images, based on the least squares SVM, that applies multiple weighting processing to pixels, features, and classes.
The purpose of the invention is realized as follows: the invention comprises at least one of the sample weighting, feature weighting, and class weighting processes.
1) The sample weighting process comprises:
① For each training sample pixel $x_i$, calculate its corresponding distance $D(x_i, x_0)$ by the formula:
$$D(x_i, x_0) = \left(K(x_i, x_i) + K(x_0, x_0) - 2K(x_i, x_0)\right)^{1/2}, \quad i = 1, 2, \ldots, n. \qquad (1)$$
where $x_0$ is the class center corresponding to sample $x_i$ (i.e. the mean of all samples of that class) and the function $K(\cdot, \cdot)$ is the kernel function in the SVM;
② Normalize the distance $D(x_i, x_0)$ by the formula:
$$ND(x_i, x_{y_i}) = D(x_i, x_{y_i})/D_{\max}, \quad i = 1, 2, \ldots, n. \qquad (2)$$
where $D_{\max} = \max_i D(x_i, x_0)$;
③ Convert the normalized distance $ND(x_i, x_{y_i})$ into the corresponding weight by the formula:
$$v_i = 1 - ND(x_i, x_{y_i})^2 + (D_{\min}/D_{\max})^2, \quad i = 1, 2, \ldots, n. \qquad (3)$$
where $D_{\min} = \min_i D(x_i, x_0)$;
④ Replace the error terms $\{e_i\}_{i=1}^n$ in the least squares SVM classification optimization expression by their weighted form $\{v_i e_i\}_{i=1}^n$, obtaining the sample-weighted least squares SVM classification optimization expression:
$$\min_{w,b,e} J(w, e) = \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\sum_{i=1}^{n}(v_i e_i)^2, \quad \gamma > 0, \qquad (4)$$
$$\text{s.t.} \quad y_i = \langle w, \phi(x_i)\rangle + b + e_i, \quad i = 1, 2, \ldots, n.$$
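A minimal sketch in Python of steps ① to ③ for the samples of a single class, assuming an RBF kernel; the helper names (`rbf_kernel`, `sample_weights`) are illustrative and not part of the patent:

```python
import numpy as np

def rbf_kernel(a, b, g=1.0):
    """K(a, b) = exp(-g * ||a - b||^2) for single vectors a and b."""
    d = a - b
    return np.exp(-g * np.dot(d, d))

def sample_weights(X, g=1.0):
    """X: (m, bands) pixels of ONE class. Returns the weights v_i of
    eq. (3) from the kernel-space distances of eq. (1)."""
    x0 = X.mean(axis=0)                              # class center
    D = np.array([np.sqrt(rbf_kernel(x, x, g) + rbf_kernel(x0, x0, g)
                          - 2.0 * rbf_kernel(x, x0, g)) for x in X])
    ND = D / D.max()                                 # eq. (2)
    return 1.0 - ND**2 + (D.min() / D.max())**2      # eq. (3), in (0, 1]
```

By construction, the farthest sample receives the smallest weight $(D_{\min}/D_{\max})^2 > 0$ and the nearest sample receives weight 1, consistent with $0 < v_i \le 1$.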
2) The feature weighting process comprises the following steps:
① For all $n$ training sample pixels, calculate the mean vector $\mu_j$ of each class ($j = 1, 2, \ldots, P$, where $P$ is the number of classes):
$$\mu_j = \frac{1}{n_j}\sum_{r_i \in C_j} r_i \qquad (5)$$
where $C_j$ and $n_j$ denote the $j$-th class sample set and its number of samples, respectively;
② Calculate the within-class scatter matrix $S_W$ by the formula:
$$S_W = \sum_{j=1}^{P}\sum_{r \in C_j}(r - \mu_j)(r - \mu_j)^T \qquad (6)$$
③ Using an appropriate orthogonal matrix $U$ that diagonalizes the real symmetric matrix $S_W$ into a diagonal matrix $B$, convert $S_W^{-1}$ into the following form:
$$S_W^{-1} = (UBU^T)^{-1} = (UB^{-1/2})(UB^{-1/2})^T \qquad (7)$$
Then the matrix $G = (UB^{-1/2})^T$ can be used as a weighting matrix to left-multiply all the hyperspectral data and complete the band weighting;
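A minimal sketch of steps ① to ③, assuming `R` holds the training pixels row-wise and `labels` their class indices (both names illustrative); a small clipping of the eigenvalues is added on the assumption that $S_W$ may be near-singular:

```python
import numpy as np

def feature_weighting_matrix(R, labels, eps=1e-10):
    """R: (n, bands) training pixels. Returns G = (U B^{-1/2})^T so that
    G @ r band-weights a spectrum r, per eqs. (5)-(7)."""
    bands = R.shape[1]
    S_W = np.zeros((bands, bands))
    for j in np.unique(labels):
        Rj = R[labels == j]
        Dj = Rj - Rj.mean(axis=0)         # deviations from class mean, eq. (5)
        S_W += Dj.T @ Dj                  # within-class scatter, eq. (6)
    B, U = np.linalg.eigh(S_W)            # S_W = U diag(B) U^T, real symmetric
    B = np.clip(B, eps, None)             # guard against a singular S_W
    return (U * B**-0.5).T                # (U B^{-1/2})^T, eq. (7)
```

All hyperspectral pixel vectors, training and test alike, are then left-multiplied by G before classification.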
3) The class weighting process comprises:
① After specifying the arrangement order of all $n$ training samples, define a different weight value for each class according to its analytical meaning;
② Construct an $n \times n$ diagonal matrix $\tilde{I}$ whose diagonal elements equal the weights of the classes to which the correspondingly ordered training samples belong;
③ Use the constructed diagonal matrix $\tilde{I}$ to replace the identity matrix $I$ in the linear system corresponding to the least squares SVM, obtaining a new classification system with the class-weighting property:
$$\begin{bmatrix} 0 & 1_v^T \\ 1_v & K + \tilde{I}/\gamma \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}$$
where $y = [y_1, y_2, \ldots, y_n]^T$ is the vector of class attribute values of the training samples and $1_v = [1, 1, \ldots, 1]^T$. Solving the weighted system yields the classification discriminant function, realizing the class-weighted classification of the hyperspectral image.
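A minimal sketch of the class-weighted system, assuming the kernel matrix `K` has been precomputed and `class_weight` maps each class index to its diagonal value (both names illustrative):

```python
import numpy as np

def class_weighted_lssvm(K, y, labels, class_weight, gamma=1.0):
    """Solve [[0, 1^T], [1, K + I_tilde/gamma]] [b; alpha] = [0; y],
    where I_tilde is diagonal with per-class weights."""
    n = len(y)
    I_tilde = np.diag([class_weight[c] for c in labels])
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + I_tilde / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.asarray(y, float))))
    return sol[0], sol[1:]   # b and alpha; f(x) = sum_i alpha_i K(x, x_i) + b
```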
The present invention may further comprise:
For each training sample pixel $x_i$, its corresponding distance $D(x_i, x_0)$ may instead be calculated by the formula:
$$D(x_i, x_0) = \left(K(x_i, x_i) + K(x_0, x_0) - 2K(x_i, x_0)\right)^{1/2} - r, \quad i = 1, 2, \ldots, n. \qquad (1)$$
where $r$ is the radius of the smallest circle, centered at $x_0$, that contains a specified proportion of the class's sample pixels. Here $r$ may also be set to 0 without calculation, which has only a small influence on the weighting effect.
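A one-line sketch of choosing $r$ as described, assuming `D` holds one class's uncorrected center distances and `frac` (an illustrative name) is the specified proportion:

```python
import numpy as np

def correction_radius(D, frac=0.5):
    """Smallest radius containing `frac` of the class's samples."""
    return np.quantile(np.asarray(D), frac)
```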
To achieve the object of the invention, the classification model is established such that: training samples with different degrees of abnormality are controlled differently; features with different contributions to class separability are emphasized differently; and data classes with different analytical meanings are treated differently, finally achieving a better data analysis effect.
According to the relationship between a sample's degree of abnormality and its distance from the class center, the distance is nonlinearly mapped into a corresponding weight to complete sample weighting. According to the weighting role of the within-class scatter matrix in the linear spectral unmixing problem, that role is extended to the least squares SVM classification problem to complete feature weighting. And according to the special meaning of the diagonal elements of the identity matrix in the least squares SVM linear system, those elements are set to different values reflecting class importance to complete class weighting.
Of the above three weighting methods, sample weighting is a means applied exclusively to the training samples, feature weighting is an operation performed on all the data, and class weighting is a reset of the diagonal elements of a matrix during training. They can be used alone or in any selectable combination, as sketched below.
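The selectable combination can be realized in one linear system. The patent presents the three weightings separately; the sketch below combines them under the assumption that sample weighting enters the dual system as $\mathrm{diag}(1/(\gamma v_i^2))$ (which follows from the KKT condition $\alpha_i = \gamma v_i^2 e_i$) and class weighting rescales those diagonal entries, while feature weighting is applied to the data beforehand. All names are illustrative.

```python
import numpy as np

def selectable_weighted_lssvm(K, y, v=None, d=None, gamma=1.0):
    """K: kernel matrix of (optionally band-weighted) training data;
    v: sample weights or None; d: per-sample class-weight diagonal or None.
    Either weighting can be switched off by passing None."""
    n = len(y)
    v = np.ones(n) if v is None else np.asarray(v, float)
    d = np.ones(n) if d is None else np.asarray(d, float)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(d / (gamma * v**2))
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.asarray(y, float))))
    return sol[0], sol[1:]    # b, alpha
```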
(IV) description of the drawings
FIG. 1 is a graph of the mapping relationship from distance to weight in the sample weighting process.
FIG. 2 is a schematic diagram of the least squares SVM selectable weighting operation interface.
FIGS. 3-a) to 3-e) compare the classification results of the first group under different weighting cases, where: FIG. 3-a) is the classification result of the unweighted least squares SVM; FIG. 3-b) with sample weighting applied; FIG. 3-c) with feature weighting; FIG. 3-d) with class weighting; and FIG. 3-e) with the three weighting methods combined.
FIGS. 4-a) to 4-e) compare the classification results of the second group under the same weighting cases, where: FIG. 4-a) is the classification result of the unweighted least squares SVM; FIG. 4-b) with sample weighting applied; FIG. 4-c) with feature weighting; FIG. 4-d) with class weighting; and FIG. 4-e) with the three weighting methods combined.
(V) detailed description of the preferred embodiments
The invention will now be described in more detail by way of example with reference to the accompanying drawings in which:
the invention relates to a method for improving analysis effect when a least square SVM is applied to hyperspectral image classification, which is completed by adopting a triple weight mode capable of checking. The basic process of sample weighting is as follows: 1) calculating a distance D (x) to its corresponding class center for each training sample peli,x0) (ii) a 2) Will be at a distance D (x)i,x0) Performing normalization processing(ii) a 3) Further, the normalized distance
Figure C20071014430100082
Converting into corresponding weight; 4) finally, the least square SVM classification is optimized to obtain an error term { e ] in the expressioni}i=1 nSubstituted by its weighted form { viei}i=1 nA least squares SVM classification optimization expression of the sample weights is obtained. The feature weighting process steps are as follows: 1) calculating various types of average vectors mu for all n training sample pixelsj(j ═ 1, 2, …, n); 2) using the mean vector mujCalculating an intra-class divergence matrix SW(ii) a 3) Using intra-class divergence matrices SWAnd calculating a weighting matrix to complete the band weighting. The class weighting process steps are as follows: 1) specify all n trainsAfter the arrangement sequence of the samples, defining different weighted values for each category according to different analytical meanings; 2) constructing an n x n diagonal matrix
Figure C20071014430100083
Diagonal elements such that
Figure C20071014430100084
The diagonal elements correspond to weights of classes to which corresponding sequence training samples belong; 3) with diagonal matrices constructed
Figure C20071014430100085
And replacing the unit matrix I in the linear equation set corresponding to the least square SVM to obtain a new classification equation set with class weighting property. A detailed description of the construction of the process of the invention is given first:
1) Sample weighting. The mathematical model of the least squares SVM is an optimization problem whose cost function is a sum of squared errors subject only to equality constraints, so its solution can be completed in a linear system. The optimization problem expression is:
$$\min_{w,b,e} J(w, e) = \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\sum_{i=1}^{n} e_i^2, \quad \gamma > 0, \qquad (1)$$
$$\text{s.t.} \quad y_i = \langle w, \phi(x_i)\rangle + b + e_i, \quad i = 1, 2, \ldots, n.$$
To obtain a weighted training model of the least squares SVM, the error variables in equation (1) are weighted, i.e. each $e_i$ is assigned a corresponding weight $v_i$, so that the formula becomes
$$\min_{w,b,e} J(w, e) = \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\sum_{i=1}^{n}(v_i e_i)^2, \quad \gamma > 0, \qquad (2)$$
$$\text{s.t.} \quad y_i = \langle w, \phi(x_i)\rangle + b + e_i, \quad i = 1, 2, \ldots, n.$$
Thus, how to reasonably determine the weights $v_i$ becomes the key issue in sample weighting. Because abnormal samples lie relatively far from their respective class centers, their degree of "abnormality" can be measured on a distance scale. FIG. 1 shows the sorted distances of a group of same-class real hyperspectral data samples and the mapping of those distances to the corresponding weights.
On the other hand, owing to within-class spectral variability, even pure samples do not concentrate at the corresponding class center but exhibit relatively small deviations. In view of this, when calculating the distance, a correction constant can be subtracted from the previously determined distance. For this purpose, the smallest radius around the class center containing the specified proportion of the class's sample points is determined and taken as the correction constant.
Let $x_0$ be the class center corresponding to sample $x_i$, and let the circle of radius $r$ centered at $x_0$ be the smallest circle containing the specified proportion of the class's samples. Denote by $\hat{D}(x_i, x_0)$ the uncorrected distance from $x_i$ to $x_0$, computed as
$$\hat{D}(x_i, x_0) = \|\phi(x_i) - \phi(x_0)\| = \left(K(x_i, x_i) + K(x_0, x_0) - 2K(x_i, x_0)\right)^{1/2} \qquad (3)$$
The corrected distance $D(x_i, x_0)$ from $x_i$ to its class center $x_0$ can then be specified as
$$D(x_i, x_0) = \hat{D}(x_i, x_0) - r, \quad i = 1, 2, \ldots, n. \qquad (4)$$
Denote
$$D_{\max} = \max_i D(x_i, x_0), \qquad D_{\min} = \min_i D(x_i, x_0) \qquad (5)$$
and let $ND(x_i, x_{y_i})$ denote the normalized form of $D(x_i, x_{y_i})$, namely:
$$ND(x_i, x_{y_i}) = D(x_i, x_{y_i})/D_{\max}, \quad i = 1, 2, \ldots, n. \qquad (6)$$
The weight factor can then be obtained by the following formula:
$$v_i = 1 - ND(x_i, x_{y_i})^2 + (D_{\min}/D_{\max})^2, \quad i = 1, 2, \ldots, n. \qquad (7)$$
It is easy to verify that $0 < v_i \le 1$. Replacing the error terms $\{e_i\}_{i=1}^n$ in equation (1) by their weighted form $\{v_i e_i\}_{i=1}^n$ now yields a novel weighted least squares SVM, with the mathematical model:
$$\min_{w,b,e} J(w, e) = \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\sum_{i=1}^{n}(v_i e_i)^2, \quad \gamma > 0, \qquad (8)$$
$$\text{s.t.} \quad y_i = \langle w, \phi(x_i)\rangle + b + e_i, \quad i = 1, 2, \ldots, n.$$
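For concreteness, a toy end-to-end sketch of the weighted model (8): from its KKT conditions, $\alpha_i = \gamma v_i^2 e_i$, so the least squares SVM linear system (cf. eq. (19) below) carries $\mathrm{diag}(1/(\gamma v_i^2))$ in place of $I/\gamma$. The toy data, the Euclidean stand-in for the kernel-space distance of eq. (3), and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = np.hstack([np.ones(20), -np.ones(20)])

def rbf(A, B, g=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

v = np.empty(len(y))                      # weights v_i of eq. (7), per class
for c in (1, -1):
    idx = np.where(y == c)[0]
    D = np.linalg.norm(X[idx] - X[idx].mean(0), axis=1)
    v[idx] = 1 - (D / D.max()) ** 2 + (D.min() / D.max()) ** 2

gamma, n = 10.0, len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = A[1:, 0] = 1.0
A[1:, 1:] = rbf(X, X) + np.diag(1.0 / (gamma * v ** 2))
b, alpha = np.split(np.linalg.solve(A, np.r_[0.0, y]), [1])
pred = np.sign(rbf(X, X) @ alpha + b)     # training-set decisions
```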
2) Feature weighting. The key to feature weighting is to find a suitable weighting matrix that strengthens the effective features and weakens the relatively less effective ones. A weighting-matrix determination method based on Fisher linear discriminant analysis, a widely used pattern classification technique, is given below. With $n$ training sample vectors available for classification, let $\mu_j$ be the mean of the $j$-th class of samples ($j = 1, 2, \ldots, P$, where $P$ is the number of classes), i.e.
$$\mu_j = \frac{1}{n_j}\sum_{r_i \in C_j} r_i \qquad (9)$$
where $C_j$ and $n_j$ denote the $j$-th class sample set and its number of samples, respectively. The within-class scatter matrix $S_W$ is accordingly defined as follows:
$$S_W = \sum_{j=1}^{P} S_j \qquad (10)$$
where
$$S_j = \sum_{r \in C_j}(r - \mu_j)(r - \mu_j)^T \qquad (11)$$
In Fisher linear spectral mixture analysis, $S_W^{-1}$ has been successfully used as a weighting matrix in the mixed-pixel spectral unmixing problem:
$$\min_{\alpha}\left\{(r - M\alpha)^T S_W^{-1} (r - M\alpha)\right\} \qquad (12)$$
where $r$ is the mixed spectral vector to be analyzed, $\alpha$ is the mixing-proportion vector to be solved, and $M$ is the spectral matrix whose column vectors are the spectral endmembers. Since $S_W$ is a real symmetric matrix, there is an orthogonal matrix $U$ that diagonalizes it into a matrix $B$:
$$U^T S_W U = B \qquad (13)$$
It can further be inferred that:
$$S_W^{-1} = (UBU^T)^{-1} = (UB^{-1/2})(UB^{-1/2})^T \qquad (14)$$
Denoting $G = (UB^{-1/2})^T$, (12) is converted into
$$\min_{\alpha}\left\{(Gr - GM\alpha)^T (Gr - GM\alpha)\right\} \qquad (15)$$
Comparing with the original spectral unmixing problem:
$$\min_{\alpha}\left\{(r - M\alpha)^T (r - M\alpha)\right\} \qquad (16)$$
it can be seen that (15) amounts to using $G = (UB^{-1/2})^T$ as a data weighting matrix. From the relationship between spectral unmixing and the data classification problem (spectral unmixing can be regarded as a more refined soft classification), $G$ can likewise be used as a weighting matrix in the classification problem.
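A minimal sketch of the weighted unmixing in (15): pre-multiplying both the spectrum and the endmember matrix by $G$ reduces (12) to ordinary least squares. The sum-to-one and nonnegativity constraints usual in unmixing are omitted for brevity; names are illustrative.

```python
import numpy as np

def weighted_unmix(r, M, G):
    """r: (bands,) mixed spectrum; M: (bands, P) endmember columns;
    G: Fisher weighting matrix. Returns the abundance vector alpha."""
    alpha, *_ = np.linalg.lstsq(G @ M, G @ r, rcond=None)
    return alpha
```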
3) Class weighting. In the least squares SVM algorithm, the corresponding dual (Lagrangian) problem is:
$$\min_{w,b,e,\alpha} L(w, b, e, \alpha) = J(w, e) - \sum_{i=1}^{n} \alpha_i\left\{\langle w, \phi(x_i)\rangle + b + e_i - y_i\right\} \qquad (17)$$
The optimality (KKT) conditions are:
$$\frac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_{i=1}^{n} \alpha_i \phi(x_i)$$
$$\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^{n} \alpha_i = 0$$
$$\frac{\partial L}{\partial e_i} = 0 \;\Rightarrow\; \alpha_i = \gamma e_i, \quad i = 1, 2, \ldots, n$$
$$\frac{\partial L}{\partial \alpha_i} = 0 \;\Rightarrow\; \langle w, \phi(x_i)\rangle + b + e_i - y_i = 0, \quad i = 1, 2, \ldots, n \qquad (18)$$
Eliminating $w$ and $e$, the above can be further expressed as the linear system:
$$\begin{bmatrix} 0 & 1_v^T \\ 1_v & K + I/\gamma \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix} \qquad (19)$$
where $y = [y_1, y_2, \ldots, y_n]^T$, $1_v = [1, 1, \ldots, 1]^T$, $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_n]^T$, $K$ is the kernel matrix of the training samples, $I$ is the $n \times n$ identity matrix, and $n$ is the number of training samples. When $I$ is the identity matrix, the training process treats every training sample equally. Class weighting changes the importance of each class by replacing the diagonal elements of $I$ at the positions of certain class samples with values other than the original 1, thereby protecting the classes of interest and suppressing the unimportant classes.
Theoretical analysis and simulation tests show that the diagonal element values of $I$ reflect the degree of attention given to each training sample: the larger a diagonal entry of $I$, the less importance the training process attaches to the corresponding sample, and vice versa. Therefore, appropriately decreasing the weights corresponding to training samples of the classes of interest and increasing those of the non-interest classes effectively improves the classification accuracy of the classes of interest.
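A usage sketch of this rule, reusing the `class_weighted_lssvm` helper sketched earlier; the weight values are illustrative:

```python
# Smaller diagonal entries emphasize the class of interest; larger entries
# suppress the others (the values 0.5 / 2.0 are illustrative choices).
class_weight = {0: 0.5,   # class of interest
                1: 2.0,   # non-interest class
                2: 2.0}
# b, alpha = class_weighted_lssvm(K, y, labels, class_weight, gamma=10.0)
```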
FIG. 2 is a schematic diagram of the least squares SVM selectable weighting operation interface.
The performance of the method is evaluated on a real remote sensing image, taken from part of an agriculture and forestry hyperspectral remote sensing test area in northwestern Indiana, USA, acquired in 1992; the image size is 144×144 with 220 spectral bands (features). Besides the background, the ground-truth data defines a total of 16 terrain classes in the image.
The first group of experimental samples combines the data of three terrain classes in the image: classes 3, 8, and 11. The spectral features of some of the pixels are extracted as training samples, and the whole data set is used as test samples. The effect is tested in turn with no weighting, sample weighting, feature weighting, class weighting (the three classes corresponding to weights 1, 5, and 10 in turn), and triple weighting; the classification results are shown in FIGS. 3-a) to 3-e) in sequence. In the classification result maps, the three classes are labeled blue, pink, and green in turn, and misclassified pixels are displayed as white dots. The experimental results show that the sample weighting and feature weighting methods can each improve the overall classification accuracy to different degrees; the class weighting method improves the analysis effect of the class given the relatively small weight (while reducing that of the class given the relatively large weight); and applying the three weighting methods simultaneously achieves a better overall analysis effect.
In the second group of experiments, terrain classes 2, 10, and 6 were selected; the experimental method is the same as above, and the classification results are shown in FIGS. 4-a) to 4-e). The objective evaluation indexes of the two experiments are given in Tables 1 and 2, respectively.
TABLE 1. Number of misclassified samples in the first group of terrain classifications (tabulated as an image in the original).
TABLE 2. Number of misclassified samples in the second group of terrain classifications (tabulated as an image in the original).

Claims (3)

1. A selectable weighting classification method for hyperspectral images, comprising at least one of a sample weighting, feature weighting, or class weighting process, characterized in that:
1) The sample weighting process comprises:
① For each training sample pixel $x_i$, calculate its corresponding distance $D(x_i, x_0)$ by the formula:
$$D(x_i, x_0) = \left(K(x_i, x_i) + K(x_0, x_0) - 2K(x_i, x_0)\right)^{1/2}, \quad i = 1, 2, \ldots, n.$$
where $x_0$ is the class center corresponding to sample $x_i$ and the function $K(\cdot, \cdot)$ is the kernel function in the SVM;
② Normalize the distance $D(x_i, x_0)$ by the formula:
$$ND(x_i, x_{y_i}) = D(x_i, x_{y_i})/D_{\max}, \quad i = 1, 2, \ldots, n.$$
where $D_{\max} = \max_i D(x_i, x_0)$;
③ Convert the normalized distance $ND(x_i, x_{y_i})$ into the corresponding weight by the formula:
$$v_i = 1 - ND(x_i, x_{y_i})^2 + (D_{\min}/D_{\max})^2, \quad i = 1, 2, \ldots, n.$$
where $D_{\min} = \min_i D(x_i, x_0)$;
④ Replace the error terms $\{e_i\}_{i=1}^n$ in the least squares SVM classification optimization expression by their weighted form $\{v_i e_i\}_{i=1}^n$, obtaining the sample-weighted least squares SVM classification optimization expression:
$$\min_{w,b,e} J(w, e) = \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\sum_{i=1}^{n}(v_i e_i)^2, \quad i = 1, 2, \ldots, n, \quad \gamma > 0,$$
$$\text{s.t.} \quad y_i = \langle w, \phi(x_i)\rangle + b + e_i;$$
2) The feature weighting process comprises the following steps:
① For all $n$ training sample pixels, calculate the mean vector $\mu_j$ of each class, $j = 1, 2, \ldots, P$:
$$\mu_j = \frac{1}{n_j}\sum_{r_i \in C_j} r_i$$
where $C_j$ and $n_j$ denote the $j$-th class sample set and its number of samples, respectively;
② Calculate the within-class scatter matrix $S_W$ by the formula:
$$S_W = \sum_{j=1}^{P}\sum_{r \in C_j}(r - \mu_j)(r - \mu_j)^T$$
③ Using an appropriate orthogonal matrix $U$ that diagonalizes the real symmetric matrix $S_W$ into a diagonal matrix $B$, convert $S_W^{-1}$ into the following form:
$$S_W^{-1} = (UBU^T)^{-1} = (UB^{-1/2})(UB^{-1/2})^T$$
Then the matrix $G = (UB^{-1/2})^T$ can be used as a weighting matrix to left-multiply all the hyperspectral data and complete the band weighting;
3) The class weighting process comprises:
① After specifying the arrangement order of all $n$ training samples, define a different weight value for each class according to its analytical meaning;
② Construct an $n \times n$ diagonal matrix $\tilde{I}$ whose diagonal elements equal the weights of the classes to which the correspondingly ordered training samples belong;
③ Use the constructed diagonal matrix $\tilde{I}$ to replace the identity matrix $I$ in the linear system corresponding to the least squares SVM, obtaining a new classification system with the class-weighting property:
$$\begin{bmatrix} 0 & 1_v^T \\ 1_v & K + \tilde{I}/\gamma \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}$$
where $y = [y_1, y_2, \ldots, y_n]^T$ is the vector of class attribute values of the training samples and $1_v = [1, 1, \ldots, 1]^T$; the weighted system is solved to obtain the classification discriminant function, realizing the class-weighted classification of the hyperspectral images.
2. The selectable weighting classification method for hyperspectral images according to claim 1, wherein:
for each training sample pixel $x_i$, its corresponding distance $D(x_i, x_0)$ is calculated by the formula:
$$D(x_i, x_0) = \left(K(x_i, x_i) + K(x_0, x_0) - 2K(x_i, x_0)\right)^{1/2} - r, \quad i = 1, 2, \ldots, n.$$
where $r$ is the radius of the smallest circle, centered at $x_0$, that contains the specified proportion of the sample pixels.
3. The selectable weighting classification method for hyperspectral images according to claim 1 or 2, characterized in that it is a combination of the sample weighting, feature weighting, and class weighting processes.
CNB2007101443011A 2007-09-12 2007-09-12 High spectrum image repeated selection weighing classification method Expired - Fee Related CN100456319C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007101443011A CN100456319C (en) 2007-09-12 2007-09-12 High spectrum image repeated selection weighing classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007101443011A CN100456319C (en) 2007-09-12 2007-09-12 High spectrum image repeated selection weighing classification method

Publications (2)

Publication Number Publication Date
CN101127086A CN101127086A (en) 2008-02-20
CN100456319C true CN100456319C (en) 2009-01-28

Family

ID=39095111

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007101443011A Expired - Fee Related CN100456319C (en) 2007-09-12 2007-09-12 High spectrum image repeated selection weighing classification method

Country Status (1)

Country Link
CN (1) CN100456319C (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102313897A (en) * 2010-06-29 2012-01-11 成都理工大学 Radioactive spectrum identification method
CN102074008B (en) * 2011-01-05 2013-02-06 哈尔滨工程大学 Fully-constrained least square linear spectrum hybrid analysis method of hyperspectral image
CN102542285B (en) * 2011-08-03 2014-01-08 清华大学 Image collection scene sorting method and image collection scene sorting device based on spectrogram analysis
CN102750551A (en) * 2012-06-18 2012-10-24 杭州电子科技大学 Hyperspectral remote sensing classification method based on support vector machine under particle optimization
CN103325096B (en) * 2013-06-25 2016-04-13 中国科学院遥感与数字地球研究所 Based on the wide cut high spectrum image reconstructing method that many/high spectrum image merges
CN104463848A (en) * 2014-11-07 2015-03-25 哈尔滨工程大学 Real-time hyperspectral image detection method with local causality window
CN106844596A (en) * 2017-01-13 2017-06-13 厦门天锐科技股份有限公司 One kind is based on improved SVM Chinese Text Categorizations
CN116843998B (en) * 2023-08-29 2023-11-14 四川省分析测试服务中心 Spectrum sample weighting method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1472634A (en) * 2003-05-16 2004-02-04 上海交通大学 High spectrum remote sensing image combined weighting random sorting method
US20060188161A1 (en) * 2005-02-22 2006-08-24 John Gruninger Process for finding endmembers in a data set
US20060251324A1 (en) * 2004-09-20 2006-11-09 Bachmann Charles M Method for image data processing
CN1932850A (en) * 2006-10-09 2007-03-21 武汉大学 Remoto sensing image space shape characteristics extracting and sorting method
CN101030299A (en) * 2007-03-29 2007-09-05 复旦大学 Method for decomposing remote-sensing-mixed image element based on data space orthogonality

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1472634A (en) * 2003-05-16 2004-02-04 上海交通大学 High spectrum remote sensing image combined weighting random sorting method
US20060251324A1 (en) * 2004-09-20 2006-11-09 Bachmann Charles M Method for image data processing
US20060188161A1 (en) * 2005-02-22 2006-08-24 John Gruninger Process for finding endmembers in a data set
CN1932850A (en) * 2006-10-09 2007-03-21 武汉大学 Remoto sensing image space shape characteristics extracting and sorting method
CN101030299A (en) * 2007-03-29 2007-09-05 复旦大学 Method for decomposing remote-sensing-mixed image element based on data space orthogonality

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A hyperspectral remote sensing image classification method based on BP networks and decision fusion. Yu Junna et al., Applied Science and Technology, Vol. 34, No. 1, 2007 *
Hyperspectral remote sensing image classification based on fuzzy support vector machines. Guo Chunyan et al., Applied Science and Technology, Vol. 34, No. 3, 2007 *

Also Published As

Publication number Publication date
CN101127086A (en) 2008-02-20

Similar Documents

Publication Publication Date Title
CN100456319C (en) High spectrum image repeated selection weighing classification method
CN109145992A (en) Cooperation generates confrontation network and sky composes united hyperspectral image classification method
CN109389080B (en) Hyperspectral image classification method based on semi-supervised WGAN-GP
CN102819745B (en) Hyper-spectral remote sensing image classifying method based on AdaBoost
CN104374738B (en) A kind of method for qualitative analysis improving identification result based on near-infrared
CN109740631B (en) OBIA-SVM-CNN remote sensing image classification method based on object
CN112070078B (en) Deep learning-based land utilization classification method and system
CN111310598B (en) Hyperspectral remote sensing image classification method based on 3-dimensional and 2-dimensional mixed convolution
CN112560967B (en) Multi-source remote sensing image classification method, storage medium and computing device
CN104866871B (en) Hyperspectral image classification method based on projection structure sparse coding
CN107491793B (en) Polarized SAR image classification method based on sparse scattering complete convolution
CN104809471B (en) A kind of high spectrum image residual error integrated classification method based on spatial spectral information
CN108734199A (en) High spectrum image robust classification method based on segmentation depth characteristic and low-rank representation
CN110555843B (en) High-precision reference-free fusion remote sensing image quality analysis method and system
CN104050507A (en) Hyper spectrum image classification method based on multilayer neural network
CN107844751A (en) The sorting technique of guiding filtering length Memory Neural Networks high-spectrum remote sensing
CN105913092A (en) Figure regular hyperspectral image band selection method based on subspace learning
CN114818985B (en) Tea quality evaluation method based on central anchor point triplet optimization pseudo-twin network
CN107292258A (en) High spectrum image low-rank representation clustering method with filtering is modulated based on bilateral weighted
CN116385819A (en) Water quality evaluation method, device and equipment based on neural network model
CN114266961A (en) Method for integrating, learning and classifying marsh vegetation stacks by integrating hyperspectral and multiband fully-polarized SAR images
CN107067407A (en) Profile testing method based on non-classical receptive field and linear non-linear modulation
CN103870807A (en) High spectrum mixed nuclear RX anomaly detection method
CN112257531B (en) Remote sensing monitoring method for forest land change based on diversity feature combination
CN104050482B (en) A kind of manifold learning generalization algorithm based on local linear smoothing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090128

Termination date: 20110912