CN112883852B - Hyperspectral image classification system and method - Google Patents

Hyperspectral image classification system and method

Info

Publication number
CN112883852B
CN112883852B (application CN202110155143.XA)
Authority
CN
China
Prior art keywords
image
gradient
algorithm
minimum value
structural elements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110155143.XA
Other languages
Chinese (zh)
Other versions
CN112883852A (en
Inventor
曹衍龙
刘佳炜
董献瑞
杨将新
曹彦鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Industrial Technology Research Institute of ZJU
Original Assignee
Shandong Industrial Technology Research Institute of ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Industrial Technology Research Institute of ZJU filed Critical Shandong Industrial Technology Research Institute of ZJU
Priority to CN202110155143.XA priority Critical patent/CN112883852B/en
Publication of CN112883852A publication Critical patent/CN112883852A/en
Application granted granted Critical
Publication of CN112883852B publication Critical patent/CN112883852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/12Computing arrangements based on biological models using genetic models
    • G06N3/126Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image classification method based on an adaptive threshold watershed algorithm and an improved support vector machine.

Description

Hyperspectral image classification system and method
Technical Field
The invention relates to hyperspectral image processing, in particular to a hyperspectral image classification system and a hyperspectral image classification method.
Background
A hyperspectral remote sensing image can image a target area simultaneously in tens to hundreds of continuous, finely subdivided spectral bands covering the ultraviolet, visible, near-infrared and mid-infrared ranges of the electromagnetic spectrum; it integrates image information and spectral information into one data set, which makes image classification possible.
Hyperspectral image classification methods fall into two main categories: pixel-based methods and object-oriented methods. Pixel-based methods extract features and classify pixel by pixel; they achieve high classification accuracy, but because every pixel must be processed, classification takes long, efficiency is low, and real-time requirements cannot be met. Object-oriented methods first segment the image into homogeneous regions and then extract features from each homogeneous region for region-level classification. Compared with pixel-based methods, object-oriented methods classify faster and can meet real-time requirements.
Disclosure of Invention
In view of the above defects in the prior art, the technical problem to be solved by the present invention is that the pixel-based classification method has low classification efficiency and cannot meet real-time requirements. The invention therefore provides a hyperspectral image classification method based on an adaptive threshold watershed algorithm and an improved support vector machine: homogeneous regions are obtained with the adaptive threshold watershed algorithm, region features are extracted, and classification is then performed with the improved support vector machine algorithm.
In order to achieve the above object, the present invention provides, in a first aspect, a hyperspectral image classification method, including the steps of:
(1) Inputting a hyperspectral image dataset to be classified;
(2) Performing an image pre-processing step;
(3) Obtaining a segmented image by using an adaptive threshold watershed algorithm;
(4) Extracting spectral features and textural features according to the segmented image;
(5) Performing feature evaluation through a classification model, removing useless features, and obtaining optimal classification;
the classification model is obtained by determining the optimal kernel function parameter g and the penalty coefficient C of the SVM model by utilizing a training subset and a WOA-GA mixed algorithm.
Further, the step (2) specifically includes the following substeps:
(201) Taking the red-light wavelength as 700.0 nm, the green-light wavelength as 546.1 nm and the blue-light wavelength as 435.8 nm, the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are synthesized into a visible-light image and converted into a gray image f (a non-limiting code sketch of this preprocessing is given after these sub-steps). The band calculation is a linear interpolation between the two measured bands nearest to the required wavelength:
H = F + (G − F)·(ρ − ρ_min)/(ρ_max − ρ_min)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρ_min and ρ_max are respectively the lower-limit and upper-limit band wavelengths closest to ρ, G is the band image corresponding to ρ_max, and F is the band image corresponding to ρ_min;
(202) The noise of the visible-light image of the forest region is filtered with a bilateral filtering algorithm, reducing pseudo local minima;
(203) A gradient image is then extracted with a multi-scale morphological gradient extraction algorithm, defined as the average over all scales and directions of the difference between the dilation and the erosion of the image:
G(f) = (1/(n·m)) Σ_{i=1..n} Σ_{j=1..m} [ (f ⊕ B_ij) − (f ⊖ B_ij) ]
where ⊕ is the dilation operation, ⊖ is the erosion operation, f is the original gray image, G(f) is the gradient image, and B_ij is a group of square structural elements; i (1 ≤ i ≤ n) is the size factor of the structural element, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structural elements in different directions;
structural elements in 4 directions are selected, at 0°, 45°, 90° and 135° respectively; each of the four 3 × 3 directional structural elements is oriented along its direction.
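As a non-limiting illustration of sub-steps (201) and (202), the following Python sketch performs the band interpolation written above, synthesizes the gray image and applies the bilateral filter; the use of OpenCV, the luminance weights (0.299, 0.587, 0.114) and the filter parameters are illustrative assumptions rather than values fixed by the disclosure.

```python
import numpy as np
import cv2  # OpenCV, used here only for the bilateral filter

def band_at(cube, wavelengths, rho):
    """Interpolate the image at wavelength rho (nm) from the two nearest measured bands."""
    wavelengths = np.asarray(wavelengths)
    upper = int(np.searchsorted(wavelengths, rho))       # first band with wavelength >= rho
    lower = max(upper - 1, 0)
    if upper >= len(wavelengths) or upper == lower:
        return cube[:, :, lower].astype(np.float32)
    f = cube[:, :, lower].astype(np.float32)
    g = cube[:, :, upper].astype(np.float32)
    t = (rho - wavelengths[lower]) / (wavelengths[upper] - wavelengths[lower])
    return f + (g - f) * t                                # H = F + (G - F)(rho - rho_min)/(rho_max - rho_min)

def visible_gray(cube, wavelengths):
    """Synthesize R/G/B at 700.0/546.1/435.8 nm, convert to gray, then bilateral-filter."""
    r = band_at(cube, wavelengths, 700.0)
    g = band_at(cube, wavelengths, 546.1)
    b = band_at(cube, wavelengths, 435.8)
    gray = 0.299 * r + 0.587 * g + 0.114 * b              # assumed luminance weights
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # bilateral filtering suppresses noise (and hence pseudo local minima) while keeping edges
    return cv2.bilateralFilter(gray, 9, 50, 50)
```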
Further, in the step (3), firstly, an adaptive threshold extraction algorithm is executed, then, an H-minima forced minimum value transformation is executed, and finally, watershed image segmentation is executed, which specifically comprises the following sub-steps:
(301) An adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
in the formula, gradmin is a local minimum value, and h is a threshold upper limit;
(302) Performing the forced-minimum transformation: the gradient image is marked with the threshold H, i.e. only the local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima; the gradient image is then modified with the forced-minimum algorithm, in which pixels whose value is smaller than the threshold are raised to the threshold by morphological erosion and dilation operations, so that local minima can occur only at the marked positions and pseudo local minima are reduced; this can be written as
M(x, y) = 1 where the gradient image has a local minimum larger than H, and M(x, y) = 0 otherwise;
I(x, y) = G(x, y) at the marked positions, and max(G(x, y), H) elsewhere;
where M is the binary image obtained by marking the gradient image with the threshold H, G is the gradient image, and I is the marked gradient image after the forced-minimum transformation;
(303) Executing a watershed algorithm to obtain a segmented image; the watershed algorithm is the Meyer algorithm (a non-limiting code sketch of sub-steps (301) to (303) is given after these sub-steps);
(304) Executing a region merging algorithm based on a spectral matching method, merging small-area regions and similar regions of the obtained preliminary segmentation image; the region similarity evaluation index T combines the Euclidean distance between the spectral vectors X and Y with the spectral angle between them.
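As a non-limiting illustration of sub-steps (301) to (303), the following sketch uses scikit-image and SciPy; the forced-minimum step follows the interpretation given above (unmarked pixels below H are raised to H), and the library choices and parameter names are assumptions rather than part of the disclosure.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import local_minima
from skimage.segmentation import watershed

def adaptive_threshold(gradient, h_upper):
    """H = mean of the local-minimum gradient values lying in (0, h_upper]."""
    minima = local_minima(gradient)
    vals = gradient[minima]
    vals = vals[(vals > 0) & (vals <= h_upper)]
    return float(vals.mean())

def adaptive_watershed(gradient, h_upper):
    """Adaptive-threshold marking, forced-minimum transformation, marker-controlled watershed."""
    H = adaptive_threshold(gradient, h_upper)
    minima = local_minima(gradient)
    markers_mask = minima & (gradient > H)            # keep only local minima larger than H
    # forced-minimum transformation: raise unmarked pixels below H up to H
    modified = np.where(markers_mask, gradient, np.maximum(gradient, H))
    markers, _ = ndi.label(markers_mask)              # label each marked minimum
    return watershed(modified, markers)               # Meyer-style flooding from the markers
```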
Further, the step (4) specifically includes the following substeps:
(401) The average spectral feature of a region is obtained by averaging, band by band, the reflectance values of all pixels of the image within the region;
(402) Texture information is extracted with the gray-gradient co-occurrence matrix: for each image of the visible-light bands, the gray-gradient co-occurrence matrix is extracted at each pixel point, 15 texture features (small-gradient dominance, large-gradient dominance, gray-distribution non-uniformity, gradient-distribution non-uniformity, energy, gray mean, gradient mean, gray variance, gradient variance, correlation, gray entropy, gradient entropy, mixed entropy, differential moment and inverse difference moment) are calculated, and their mean values are taken to obtain the 15-dimensional feature.
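As a non-limiting illustration of sub-steps (401) and (402), the following sketch computes the region mean spectrum and two of the fifteen gray-gradient co-occurrence statistics (small-gradient dominance and gray entropy); the quantisation to 16 levels and the gradient operator are illustrative assumptions, and the remaining statistics follow the same pattern from the normalised co-occurrence matrix.

```python
import numpy as np

def region_mean_spectrum(cube, region_mask):
    """Average reflectance of every band over the pixels of one region (sub-step 401)."""
    return cube[region_mask].mean(axis=0)              # cube: (H, W, B), region_mask: (H, W) bool

def gray_gradient_cooccurrence(gray, levels=16):
    """Normalised gray-gradient co-occurrence matrix of one band image (sub-step 402)."""
    gy, gx = np.gradient(gray.astype(np.float32))
    grad = np.sqrt(gx ** 2 + gy ** 2)
    quantise = lambda a: np.minimum((a / (a.max() + 1e-12) * levels).astype(int), levels - 1)
    gq, dq = quantise(gray.astype(np.float32)), quantise(grad)
    mat = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(mat, (gq.ravel(), dq.ravel()), 1.0)      # count (gray level, gradient level) pairs
    return mat / mat.sum()

def small_gradient_dominance(p):
    """Sum over i, j of p(i, j)/(j+1)^2, the gradient index shifted so it starts at 1."""
    j = np.arange(p.shape[1])
    return float((p / (j + 1.0) ** 2).sum() / p.sum())

def gray_entropy(p):
    """Entropy of the marginal gray-level distribution of the co-occurrence matrix."""
    pg = p.sum(axis=1)
    pg = pg[pg > 0]
    return float(-(pg * np.log2(pg)).sum())
```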
Further, the step (5) comprises the following specific sub-steps:
(501) Ranking all the features by using the ranking index, and evaluating the importance of the features;
(502) Removing the features with the least scores to form a new subset;
(503) The above steps are repeated until only one feature remains in the subset, and the finally retained features are selected according to the classification accuracy of the classification model;
the feature ranking index used is the maximum geometric margin feature ranking index:
H_c = | ‖W‖² − ‖W^(−P)‖² |
where ‖W‖² and ‖W^(−P)‖² are respectively the squared weight of the SVM model on the current feature subset and after removal of the P-th feature, and ‖W‖² is computed as
‖W‖² = Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
where i and j are sample indices, α_i and α_j are the solved parameters, y is the class label and K is the kernel function.
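As a non-limiting illustration of the ranking index, the following scikit-learn sketch computes ‖W‖² from the dual coefficients of a trained RBF-kernel SVC and scores each feature by the change in ‖W‖² when that feature is removed and the model retrained; binary labels are assumed, and retraining per candidate feature is one straightforward reading of the criterion rather than the only possible implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def squared_weight(X, y, C, gamma):
    """||W||^2 = sum_ij alpha_i alpha_j y_i y_j K(x_i, x_j) for a trained RBF-kernel SVC."""
    clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(X, y)
    coef = clf.dual_coef_.ravel()                      # alpha_i * y_i for the support vectors
    K = rbf_kernel(clf.support_vectors_, clf.support_vectors_, gamma=gamma)
    return float(coef @ K @ coef)

def ranking_scores(X, y, C, gamma):
    """H_c(P) = | ||W||^2 - ||W^(-P)||^2 | for every feature P of the current subset."""
    full = squared_weight(X, y, C, gamma)
    scores = []
    for p in range(X.shape[1]):
        reduced = np.delete(X, p, axis=1)
        scores.append(abs(full - squared_weight(reduced, y, C, gamma)))
    return np.array(scores)                            # the lowest-scoring feature is removed next
```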
The invention provides in a second aspect a hyperspectral image classification system comprising the following modules:
the hyperspectral image data set input module is used for inputting a hyperspectral image data set to be classified;
an image preprocessing module for performing an image preprocessing step;
the image segmentation module is used for obtaining a segmented image by using an adaptive threshold watershed algorithm;
the feature extraction module is used for extracting spectral features and texture features according to the segmented image;
the classification module is used for performing feature evaluation through the classification model, removing useless features and obtaining the optimal classification;
the classification model of the classification module is obtained by determining the optimal kernel function parameter g and the penalty coefficient C of the SVM model by utilizing a training subset and a WOA-GA mixed algorithm.
Further, the image pre-processing module is arranged to perform the steps of:
(701) Taking the red-light wavelength as 700.0 nm, the green-light wavelength as 546.1 nm and the blue-light wavelength as 435.8 nm, the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are synthesized into a visible-light image and converted into a gray image f. The band calculation is a linear interpolation between the two measured bands nearest to the required wavelength:
H = F + (G − F)·(ρ − ρ_min)/(ρ_max − ρ_min)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρ_min and ρ_max are respectively the lower-limit and upper-limit band wavelengths closest to ρ, G is the band image corresponding to ρ_max, and F is the band image corresponding to ρ_min;
(702) The noise of the visible-light image of the forest region is filtered with a bilateral filtering algorithm, reducing pseudo local minima;
(703) A gradient image is then extracted with a multi-scale morphological gradient extraction algorithm, defined as the average over all scales and directions of the difference between the dilation and the erosion of the image:
G(f) = (1/(n·m)) Σ_{i=1..n} Σ_{j=1..m} [ (f ⊕ B_ij) − (f ⊖ B_ij) ]
where ⊕ is the dilation operation, ⊖ is the erosion operation, f is the original gray image, G(f) is the gradient image, and B_ij is a group of square structural elements; i (1 ≤ i ≤ n) is the size factor of the structural element, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structural elements in different directions;
structural elements in 4 directions are selected, at 0°, 45°, 90° and 135° respectively; each of the four 3 × 3 directional structural elements is oriented along its direction.
Further, the image segmentation module is arranged to perform the steps of:
firstly, executing an adaptive threshold extraction algorithm, then executing H-minima forced minimum value transformation, and finally executing watershed image segmentation, wherein the method specifically comprises the following steps:
(801) An adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
where gradmin is a local minimum value and h is the upper limit of the threshold;
(802) Performing the forced-minimum transformation: the gradient image is marked with the threshold H, i.e. only the local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima; the gradient image is then modified with the forced-minimum algorithm, in which pixels whose value is smaller than the threshold are raised to the threshold by morphological erosion and dilation operations, so that local minima can occur only at the marked positions and pseudo local minima are reduced; this can be written as
M(x, y) = 1 where the gradient image has a local minimum larger than H, and M(x, y) = 0 otherwise;
I(x, y) = G(x, y) at the marked positions, and max(G(x, y), H) elsewhere;
where M is the binary image obtained by marking the gradient image with the threshold H, G is the gradient image, and I is the marked gradient image after the forced-minimum transformation;
(803) Executing a watershed algorithm to obtain a segmented image; the watershed algorithm is the Meyer algorithm;
(804) Executing a region merging algorithm based on a spectral matching method, merging small-area regions and similar regions of the obtained preliminary segmentation image; the region similarity evaluation index T combines the Euclidean distance between the spectral vectors X and Y with the spectral angle between them.
Further, the feature extraction module is arranged to perform the steps of:
(901) The average spectral feature of a region is obtained by averaging, band by band, the reflectance values of all pixels of the image within the region;
(902) Texture information is extracted with the gray-gradient co-occurrence matrix: for each image of the visible-light bands, the gray-gradient co-occurrence matrix is extracted at each pixel point, 15 texture features (small-gradient dominance, large-gradient dominance, gray-distribution non-uniformity, gradient-distribution non-uniformity, energy, gray mean, gradient mean, gray variance, gradient variance, correlation, gray entropy, gradient entropy, mixed entropy, differential moment and inverse difference moment) are calculated, and their mean values are taken to obtain the 15-dimensional feature.
Further, the classification module is arranged to perform the steps of:
(1001) Ranking all the features by using the ranking index, and evaluating the importance of the features;
(1002) Eliminating the features with the least scores to form a new subset;
(1003) The above steps are repeated until only one feature remains in the subset, and the finally retained features are selected according to the classification accuracy of the classification model;
the feature ranking index used is the maximum geometric margin feature ranking index:
H_c = | ‖W‖² − ‖W^(−P)‖² |
where ‖W‖² and ‖W^(−P)‖² are respectively the squared weight of the SVM model on the current feature subset and after removal of the P-th feature, and ‖W‖² is computed as
‖W‖² = Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
where i and j are sample indices, α_i and α_j are the solved parameters, y is the class label and K is the kernel function.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a flow chart of the adaptive threshold watershed algorithm in a preferred embodiment of the invention;
FIG. 2 is a flow chart of a watershed algorithm in a preferred embodiment of the invention;
FIG. 3 is a flow chart of a region merging algorithm based on a spectrum matching method in a preferred embodiment of the present invention;
FIG. 4 is a flow chart of the WOAGA hybrid algorithm in a preferred embodiment of the present invention;
FIG. 5 is a flow chart of the SVM-RFE based feature selection algorithm in a preferred embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
In a specific embodiment of the hyperspectral image classification method according to the invention, the method comprises the following steps:
1. Inputting a hyperspectral image dataset to be classified;
2. The image preprocessing step is executed; the specific steps are as follows:
(1) Taking the red-light wavelength as 700.0 nm, the green-light wavelength as 546.1 nm and the blue-light wavelength as 435.8 nm, the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are synthesized into a visible-light image and converted into a gray image f. The band calculation is a linear interpolation between the two measured bands nearest to the required wavelength:
H = F + (G − F)·(ρ − ρ_min)/(ρ_max − ρ_min)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρ_min and ρ_max are respectively the lower-limit and upper-limit band wavelengths closest to ρ, G is the band image corresponding to ρ_max, and F is the band image corresponding to ρ_min.
(2) The noise of the visible-light image of the forest region is filtered with a bilateral filtering algorithm, reducing pseudo local minima;
(3) A gradient image is then extracted with a multi-scale morphological gradient extraction algorithm, so that internal textures are reduced and a better segmentation effect is obtained; the algorithm is defined as the average over all scales and directions of the difference between the dilation and the erosion of the image:
G(f) = (1/(n·m)) Σ_{i=1..n} Σ_{j=1..m} [ (f ⊕ B_ij) − (f ⊖ B_ij) ]
where ⊕ is the dilation operation, ⊖ is the erosion operation, f is the original gray image, G(f) is the gradient image, and B_ij is a group of square structural elements; i (1 ≤ i ≤ n) is the size factor of the structural element, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structural elements in different directions. The invention selects structural elements in 4 directions, at 0°, 45°, 90° and 135° respectively; each of the four 3 × 3 directional structural elements is oriented along its direction (a non-limiting code sketch of this gradient extraction is given after these steps).
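As a non-limiting illustration of the multi-scale morphological gradient of step (3), the following SciPy sketch uses one line-shaped structuring element per direction (0°, 45°, 90°, 135°), grown to (2i+1) × (2i+1) at scale i, and averages the dilation-erosion differences over all scales and directions; the concrete element shapes and the plain averaging are assumptions consistent with the definition above, not a reproduction of the original figure.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def directional_elements(i):
    """Line structuring elements at 0/45/90/135 degrees inside a (2i+1)x(2i+1) window."""
    size = 2 * i + 1
    e0 = np.zeros((size, size), bool)
    e0[i, :] = True                                    # 0 degrees (horizontal line)
    e90 = e0.T                                         # 90 degrees (vertical line)
    e45 = np.eye(size, dtype=bool)[:, ::-1]            # 45 degrees (anti-diagonal)
    e135 = np.eye(size, dtype=bool)                    # 135 degrees (main diagonal)
    return [e0, e45, e90, e135]

def multiscale_morph_gradient(f, n=3):
    """Average of (dilation - erosion) of f over n scales and 4 directions."""
    f = f.astype(np.float32)
    grads = []
    for i in range(1, n + 1):
        for se in directional_elements(i):
            grads.append(grey_dilation(f, footprint=se) - grey_erosion(f, footprint=se))
    return np.mean(grads, axis=0)
```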
3. Obtaining a segmented image by using an adaptive threshold watershed algorithm, firstly executing an adaptive threshold extraction algorithm, then executing H-minima forced minimum value transformation, and finally executing watershed image segmentation, wherein a flow chart of the algorithm is shown in figure 1, and the method comprises the following specific steps:
(1) An adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
where gradmin is the local minimum value and h is the upper limit of the threshold. Pseudo local minima caused by noise and internal texture have small values, whereas valid local minima have large values; if the mean of all local minima were used as the threshold H, the valid local minima would make H too large, and valid local minima might then be eliminated during the H-minima forced-minimum transformation;
(2) The forced-minimum transformation is performed: the gradient image is marked with the threshold H, i.e. only the local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima. The gradient image is then modified with the forced-minimum algorithm: pixels whose value is smaller than the threshold are raised to the threshold by morphological erosion and dilation operations, so that local minima can occur only at the marked positions and pseudo local minima are reduced. This can be written as
M(x, y) = 1 where the gradient image has a local minimum larger than H, and M(x, y) = 0 otherwise;
I(x, y) = G(x, y) at the marked positions, and max(G(x, y), H) elsewhere;
where M is the binary image obtained by marking the gradient image with the threshold H, G is the gradient image, and I is the marked gradient image after the forced-minimum transformation.
(3) A watershed algorithm is executed to obtain the segmented image. The watershed algorithm adopted by the invention is the Meyer algorithm, and the algorithm flow is shown in FIG. 2;
(4) A region merging algorithm based on a spectral matching method is executed, and the small-area regions and similar regions of the obtained preliminary segmentation image are merged; a flow chart of the algorithm is shown in FIG. 3. The region similarity evaluation index is a new index T combining two terms: the left term is the Euclidean distance between the spectral vectors X and Y, and the right term is the spectral angle between them.
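As a non-limiting illustration of the region merging of step (4), the following sketch evaluates the similarity of two region mean spectra; the disclosure only states that the index combines a Euclidean-distance term and a spectral-angle term, so the product used here and the merging thresholds are assumptions for illustration.

```python
import numpy as np

def spectral_similarity(x, y, eps=1e-12):
    """Combine Euclidean distance and spectral angle between two region mean spectra."""
    euclid = np.linalg.norm(x - y)
    cos = np.clip(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps), -1.0, 1.0)
    angle = np.arccos(cos)                  # spectral angle in radians
    return euclid * angle                   # assumed combination: smaller means more similar

def should_merge(x, y, area, min_area=50, t_max=0.5):
    """Merge a region with its most similar neighbour if it is small or spectrally close."""
    return area < min_area or spectral_similarity(x, y) < t_max
```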
4. According to the segmented image, extracting spectral features and textural features, and specifically comprising the following steps:
(1) The average spectral feature of a region is obtained by averaging, band by band, the reflectance values of all pixels of the image within the region;
(2) Texture information is extracted with the gray-gradient co-occurrence matrix: for each image of the visible-light bands, the gray-gradient co-occurrence matrix is extracted at each pixel point, 15 texture features (small-gradient dominance, large-gradient dominance, gray-distribution non-uniformity, gradient-distribution non-uniformity, energy, gray mean, gradient mean, gray variance, gradient variance, correlation, gray entropy, gradient entropy, mixed entropy, differential moment and inverse difference moment) are calculated, and their mean values are taken to obtain the 15-dimensional feature.
5. Feature evaluation is performed based on the WOAGA-SVM and a recursive feature elimination algorithm, useless features are removed, and the optimal classification model is obtained. The Whale Optimization Algorithm (WOA) and the Genetic Algorithm (GA) are two optimization algorithms, each with its own advantages and disadvantages; the invention proposes a WOAGA hybrid optimization algorithm that searches for the optimal parameters more effectively and uses it to optimize the Gaussian kernel parameter g and the penalty coefficient C of the support vector machine, with the algorithm steps shown in FIG. 4 (a compact code sketch of such a hybrid parameter search is given at the end of this step). The procedure based on the WOAGA-SVM and recursive feature elimination is shown in FIG. 5, with the following specific steps:
(1) The optimal parameters g and C of the SVM model are found with the training subset and the WOAGA algorithm, and the optimal classification model is obtained by training;
(2) All features are ranked with the ranking index, and the importance of each feature is evaluated;
(3) The lowest-scoring (least useful) features are removed to form a new subset;
(4) The above steps are repeated until only one feature remains in the subset, and the finally retained features are selected according to the classification accuracy of the model.
The feature ranking index used is the maximum geometric margin feature ranking index:
H_c = | ‖W‖² − ‖W^(−P)‖² |
where ‖W‖² and ‖W^(−P)‖² are respectively the squared weight of the SVM model on the current feature subset and after removal of the P-th feature, and ‖W‖² is computed as
‖W‖² = Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
where i and j are sample indices, α_i and α_j are the solved parameters, y is the class label and K is the kernel function.
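As a non-limiting illustration of the WOAGA parameter search referenced in this step, the following sketch alternates whale-optimization position updates with a genetic crossover/mutation step and uses cross-validated SVM accuracy as the fitness; the exact hybridisation of FIG. 4 is not reproduced, so the update rules, population size, search bounds and fitness definition are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(params, X, y):
    """Cross-validated accuracy of an RBF SVM with gamma=params[0], C=params[1]."""
    g, C = params
    return cross_val_score(SVC(kernel="rbf", gamma=g, C=C), X, y, cv=3).mean()

def woaga_search(X, y, pop=12, iters=30, lo=(1e-3, 1e-1), hi=(10.0, 1e3), seed=0):
    """Hybrid WOA + GA search over (gamma, C); an illustrative scheme, not FIG. 4 verbatim."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(pop, 2))              # population of (gamma, C) pairs
    fit = np.array([fitness(p, X, y) for p in P])
    best, best_fit = P[fit.argmax()].copy(), fit.max()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                        # WOA control parameter, 2 -> 0
        for k in range(pop):
            r = rng.random(2)
            A, Cc = 2 * a * r - a, 2 * rng.random(2)
            if rng.random() < 0.5:                       # encircling prey / random search
                ref = best if np.all(np.abs(A) < 1) else P[rng.integers(pop)]
                P[k] = ref - A * np.abs(Cc * ref - P[k])
            else:                                        # spiral (bubble-net) update
                l = rng.uniform(-1, 1)
                P[k] = np.abs(best - P[k]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        P = np.clip(P, lo, hi)
        fit = np.array([fitness(p, X, y) for p in P])
        # GA step: cross the two fittest individuals, mutate one gene, replace the worst
        order = fit.argsort()
        child = np.where(rng.random(2) < 0.5, P[order[-1]], P[order[-2]])
        child[rng.integers(2)] *= rng.uniform(0.8, 1.2)
        P[order[0]] = np.clip(child, lo, hi)
        fit[order[0]] = fitness(P[order[0]], X, y)
        if fit.max() > best_fit:
            best, best_fit = P[fit.argmax()].copy(), fit.max()
    return {"gamma": float(best[0]), "C": float(best[1]), "cv_accuracy": float(best_fit)}
```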
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (2)

1. A hyperspectral image classification method is characterized by comprising the following steps:
(1) Inputting a hyperspectral image dataset to be classified;
(2) Performing an image pre-processing step;
(3) Obtaining a segmented image by using an adaptive threshold watershed algorithm;
(4) Extracting spectral features and textural features according to the segmented image;
(5) Performing feature evaluation through a classification model, removing useless features, and obtaining optimal classification;
the classification model is obtained by determining the optimal kernel function parameter g and the penalty coefficient C of the SVM model by utilizing a training subset and a WOA-GA mixed algorithm;
in the step (2), the method specifically comprises the following substeps:
(201) Taking the red-light wavelength as 700.0 nm, the green-light wavelength as 546.1 nm and the blue-light wavelength as 435.8 nm, the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are synthesized into a visible-light image and converted into a gray image f. The band calculation is a linear interpolation between the two measured bands nearest to the required wavelength:
H = F + (G − F)·(ρ − ρ_min)/(ρ_max − ρ_min)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρ_min and ρ_max are respectively the lower-limit and upper-limit band wavelengths closest to ρ, G is the band image corresponding to ρ_max, and F is the band image corresponding to ρ_min;
(202) The noise of the visible-light image of the forest region is filtered with a bilateral filtering algorithm, reducing pseudo local minima;
(203) A gradient image is then extracted with a multi-scale morphological gradient extraction algorithm, defined as the average over all scales and directions of the difference between the dilation and the erosion of the image:
G(f) = (1/(n·m)) Σ_{i=1..n} Σ_{j=1..m} [ (f ⊕ B_ij) − (f ⊖ B_ij) ]
where ⊕ is the dilation operation, ⊖ is the erosion operation, f is the original gray image, G(f) is the gradient image, and B_ij is a group of square structural elements; i (1 ≤ i ≤ n) is the size factor of the structural element, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structural elements in different directions;
structural elements in 4 directions are selected, at 0°, 45°, 90° and 135° respectively; each of the four 3 × 3 directional structural elements is oriented along its direction;
In the step (3), firstly, an adaptive threshold extraction algorithm is executed, then, H-minima forced minimum value transformation is executed, and finally, watershed image segmentation is executed, and the method specifically comprises the following substeps:
(301) An adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
in the formula, gradmin is a local minimum value, and h is a threshold upper limit;
(302) Performing the forced-minimum transformation: the gradient image is marked with the threshold H, i.e. only the local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima; the gradient image is then modified with the forced-minimum algorithm, in which pixels whose value is smaller than the threshold are raised to the threshold by morphological erosion and dilation operations, so that local minima can occur only at the marked positions and pseudo local minima are reduced; this can be written as
M(x, y) = 1 where the gradient image has a local minimum larger than H, and M(x, y) = 0 otherwise;
I(x, y) = G(x, y) at the marked positions, and max(G(x, y), H) elsewhere;
where M is the binary image obtained by marking the gradient image with the threshold H, G is the gradient image, and I is the marked gradient image after the forced-minimum transformation;
(303) Executing a watershed algorithm to obtain a segmented image; the watershed algorithm is the Meyer algorithm;
(304) Executing a region merging algorithm based on a spectral matching method, merging small-area regions and similar regions of the obtained preliminary segmentation image; the region similarity evaluation index T combines the Euclidean distance between the spectral vectors X and Y with the spectral angle between them;
in the step (4), the method specifically comprises the following substeps:
(401) The average spectral feature of a region is obtained by averaging, band by band, the reflectance values of all pixels of the image within the region;
(402) Texture information is extracted with the gray-gradient co-occurrence matrix: for each image of the visible-light bands, the gray-gradient co-occurrence matrix is extracted at each pixel point, 15 texture features (small-gradient dominance, large-gradient dominance, gray-distribution non-uniformity, gradient-distribution non-uniformity, energy, gray mean, gradient mean, gray variance, gradient variance, correlation, gray entropy, gradient entropy, mixed entropy, differential moment and inverse difference moment) are calculated, and their mean values are taken to obtain the 15-dimensional feature;
the step (5) comprises the following specific sub-steps:
(501) Ranking all the features by using the ranking index, and evaluating the importance of the features;
(502) Removing the features with the least scores to form a new subset;
(503) The above steps are repeated until only one feature remains in the subset, and the finally retained features are selected according to the classification accuracy of the classification model;
the feature ranking index used is the maximum geometric margin feature ranking index:
R_c = | ‖W‖² − ‖W^(−P)‖² |;
where ‖W‖² and ‖W^(−P)‖² are respectively the squared weight of the SVM model on the current feature subset and after removal of the P-th feature, and ‖W‖² is computed as
‖W‖² = Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
where i and j are sample indices, α_i and α_j are the solved parameters, y is the class label and K is the kernel function.
2. A hyperspectral image classification system comprises the following modules:
the hyperspectral image data set input module is used for inputting a hyperspectral image data set to be classified; an image preprocessing module for performing an image preprocessing step;
the image segmentation module is used for obtaining a segmented image by using an adaptive threshold watershed algorithm;
the feature extraction module is used for extracting spectral features and texture features according to the segmented image;
the classification module is used for carrying out feature evaluation through a classification model, removing useless features and obtaining the optimal classification;
the classification model of the classification module is obtained by determining the optimal kernel function parameter g and the penalty coefficient C of the SVM model by utilizing a training subset and a WOA-GA mixed algorithm;
the image pre-processing module is arranged to perform the steps of:
(701) Taking the red-light wavelength as 700.0 nm, the green-light wavelength as 546.1 nm and the blue-light wavelength as 435.8 nm, the image at each of these wavelengths is obtained by band calculation; the red, green and blue images are synthesized into a visible-light image and converted into a gray image f. The band calculation is a linear interpolation between the two measured bands nearest to the required wavelength:
H = F + (G − F)·(ρ − ρ_min)/(ρ_max − ρ_min)
where H is the image obtained by the band calculation, ρ is the wavelength to be calculated, ρ_min and ρ_max are respectively the lower-limit and upper-limit band wavelengths closest to ρ, G is the band image corresponding to ρ_max, and F is the band image corresponding to ρ_min;
(702) The noise of the visible-light image of the forest region is filtered with a bilateral filtering algorithm, reducing pseudo local minima;
(703) A gradient image is then extracted with a multi-scale morphological gradient extraction algorithm, defined as the average over all scales and directions of the difference between the dilation and the erosion of the image:
G(f) = (1/(n·m)) Σ_{i=1..n} Σ_{j=1..m} [ (f ⊕ B_ij) − (f ⊖ B_ij) ]
where ⊕ is the dilation operation, ⊖ is the erosion operation, f is the original gray image, G(f) is the gradient image, and B_ij is a group of square structural elements; i (1 ≤ i ≤ n) is the size factor of the structural element, whose size is (2i+1) × (2i+1), and j (1 ≤ j ≤ m) is the shape factor, representing structural elements in different directions;
structural elements in 4 directions are selected, at 0°, 45°, 90° and 135° respectively; each of the four 3 × 3 directional structural elements is oriented along its direction;
The image segmentation module is arranged to perform the steps of:
firstly, executing an adaptive threshold extraction algorithm, then executing H-minima forced minimum value transformation, and finally executing watershed image segmentation, wherein the method specifically comprises the following steps:
(801) An adaptive threshold extraction algorithm is performed, defined as follows:
H=mean(0<gradmin≤h);
in the formula, gradmin is a local minimum value, and h is a threshold upper limit;
(802) Performing the forced-minimum transformation: the gradient image is marked with the threshold H, i.e. only the local minima larger than the threshold are retained, giving a binary image that reflects the positions of the valid local minima; the gradient image is then modified with the forced-minimum algorithm, in which pixels whose value is smaller than the threshold are raised to the threshold by morphological erosion and dilation operations, so that local minima can occur only at the marked positions and pseudo local minima are reduced; this can be written as
M(x, y) = 1 where the gradient image has a local minimum larger than H, and M(x, y) = 0 otherwise;
I(x, y) = G(x, y) at the marked positions, and max(G(x, y), H) elsewhere;
where M is the binary image obtained by marking the gradient image with the threshold H, G is the gradient image, and I is the marked gradient image after the forced-minimum transformation;
(803) Executing a watershed algorithm to obtain a segmented image; the watershed algorithm is the Meyer algorithm;
(804) Executing a region merging algorithm based on a spectral matching method, merging small-area regions and similar regions of the obtained preliminary segmentation image; the region similarity evaluation index T combines the Euclidean distance between the spectral vectors X and Y with the spectral angle between them;
the feature extraction module is arranged to perform the steps of:
(901) The average spectral feature of a region is obtained by averaging, band by band, the reflectance values of all pixels of the image within the region;
(902) Texture information is extracted with the gray-gradient co-occurrence matrix: for each image of the visible-light bands, the gray-gradient co-occurrence matrix is extracted at each pixel point, 15 texture features (small-gradient dominance, large-gradient dominance, gray-distribution non-uniformity, gradient-distribution non-uniformity, energy, gray mean, gradient mean, gray variance, gradient variance, correlation, gray entropy, gradient entropy, mixed entropy, differential moment and inverse difference moment) are calculated, and their mean values are taken to obtain the 15-dimensional feature;
the classification module is arranged to perform the steps of:
(1001) Ranking all the features by using the ranking index, and evaluating the importance of the features;
(1002) Removing the features with the least scores to form a new subset;
(1003) The above steps are repeated until only one feature remains in the subset, and the finally retained features are selected according to the classification accuracy of the classification model;
wherein the feature ranking index used is the maximum geometric margin feature ranking index:
R_c = | ‖W‖² − ‖W^(−P)‖² |;
where ‖W‖² and ‖W^(−P)‖² are respectively the squared weight of the SVM model on the current feature subset and after removal of the P-th feature, and ‖W‖² is computed as
‖W‖² = Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
where i and j are sample indices, α_i and α_j are the solved parameters, y is the class label and K is the kernel function.
CN202110155143.XA 2021-02-04 2021-02-04 Hyperspectral image classification system and method Active CN112883852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110155143.XA CN112883852B (en) 2021-02-04 2021-02-04 Hyperspectral image classification system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110155143.XA CN112883852B (en) 2021-02-04 2021-02-04 Hyperspectral image classification system and method

Publications (2)

Publication Number Publication Date
CN112883852A CN112883852A (en) 2021-06-01
CN112883852B true CN112883852B (en) 2022-10-28

Family

ID=76057246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110155143.XA Active CN112883852B (en) 2021-02-04 2021-02-04 Hyperspectral image classification system and method

Country Status (1)

Country Link
CN (1) CN112883852B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113378979B (en) * 2021-07-02 2022-04-29 浙江大学 Hyperspectral band selection method and device based on band attention reconstruction network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510309A (en) * 2009-03-30 2009-08-19 西安电子科技大学 Segmentation method for improving water parting SAR image based on compound wavelet veins region merge
CN110472598A (en) * 2019-08-20 2019-11-19 齐鲁工业大学 SVM machine pick cotton flower based on provincial characteristics contains miscellaneous image partition method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100484479C (en) * 2005-08-26 2009-05-06 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic image enhancement and spot inhibition method
CN109359653B (en) * 2018-09-12 2020-07-07 中国农业科学院农业信息研究所 Cotton leaf adhesion lesion image segmentation method and system
CN109871884B (en) * 2019-01-25 2023-03-24 曲阜师范大学 Multi-feature-fused object-oriented remote sensing image classification method of support vector machine
CN111881933B (en) * 2019-06-29 2024-04-09 浙江大学 Hyperspectral image classification method and system
CN112287886B (en) * 2020-11-19 2023-09-22 安徽农业大学 Wheat plant nitrogen content estimation method based on hyperspectral image fusion map features

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510309A (en) * 2009-03-30 2009-08-19 西安电子科技大学 Segmentation method for improving water parting SAR image based on compound wavelet veins region merge
CN110472598A (en) * 2019-08-20 2019-11-19 齐鲁工业大学 SVM machine pick cotton flower based on provincial characteristics contains miscellaneous image partition method and system

Also Published As

Publication number Publication date
CN112883852A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN107610114B (en) optical satellite remote sensing image cloud and snow fog detection method based on support vector machine
Zhang et al. Feature enhancement network: A refined scene text detector
Zhang et al. Cloud detection in high-resolution remote sensing images using multi-features of ground objects
CN104217196B (en) A kind of remote sensing image circle oil tank automatic testing method
CN111401380B (en) RGB-D image semantic segmentation method based on depth feature enhancement and edge optimization
CN111079596A (en) System and method for identifying typical marine artificial target of high-resolution remote sensing image
CN107992856B (en) High-resolution remote sensing building shadow detection method under urban scene
CN113221881B (en) Multi-level smart phone screen defect detection method
WO2021118463A1 (en) Defect detection in image space
CN109977899B (en) Training, reasoning and new variety adding method and system for article identification
CN112883852B (en) Hyperspectral image classification system and method
Nemade et al. Co-occurrence patterns based fruit quality detection for hierarchical fruit image annotation
CN114373079A (en) Rapid and accurate ground penetrating radar target detection method
CN112070116B (en) Automatic artistic drawing classification system and method based on support vector machine
CN111368776B (en) High-resolution remote sensing image classification method based on deep ensemble learning
Singh et al. Detection of changes in Landsat Images using Hybrid PSO-FCM
Chowdhury et al. Scene text detection using sparse stroke information and MLP
Usha et al. Significance of texture features in the segmentation of remotely sensed images
Wu et al. Vehicle detection in high-resolution images using superpixel segmentation and CNN iteration strategy
CN113657196B (en) SAR image target detection method, SAR image target detection device, electronic equipment and storage medium
Thottolil et al. Automatic Building Footprint Extraction using Random Forest Algorithm from High Resolution Google Earth Images: A Feature-Based Approach
PL A study on various image processing techniques
CN111626150B (en) Commodity identification method
CN112966781A (en) Hyperspectral image classification method based on triple loss and convolutional neural network
CN113850274A (en) Image classification method based on HOG characteristics and DMD

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant