WO2019047248A1 - Feature extraction method and device for hyperspectral remotely sensed image - Google Patents

Feature extraction method and device for hyperspectral remotely sensed image Download PDF

Info

Publication number
WO2019047248A1
WO2019047248A1 PCT/CN2017/101353
Authority
WO
WIPO (PCT)
Prior art keywords
image
gradient
normalized image
feature extraction
feature
Prior art date
Application number
PCT/CN2017/101353
Other languages
French (fr)
Chinese (zh)
Inventor
贾森
吴奎霖
朱家松
邓琳
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 (Shenzhen University)
Publication of WO2019047248A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/40: Image enhancement or restoration using histogram techniques
    • G06T9/00: Image coding
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns

Definitions

  • the present application relates to the field of image processing, and in particular to a method and apparatus for classifying hyperspectral remote sensing images.
  • A hyperspectral remote sensing image refers to hyperspectral image data acquired by a sensor imaging in different wavelength bands within the visible, near-infrared, mid-infrared and thermal-infrared regions of the electromagnetic spectrum. Hyperspectral remote sensing images therefore contain rich spatial, radiometric and spectral information, which makes the fine classification and identification of surface materials possible.
  • A hyperspectral remote sensing image formed over hundreds of bands carries rich radiometric, spatial and spectral information about ground objects, which makes their identification and classification more effective.
  • To classify ground objects using hyperspectral remote sensing images, feature extraction is first required.
  • Existing feature extraction methods mainly fall into two categories: spatial feature extraction and spatial-spectral feature extraction.
  • Spatial feature extraction mainly uses the spatial information of different bands to represent a hyperspectral remote sensing image: it first extracts the spatial features of each band and then stacks the spatial features of the different bands.
  • Gabor features and the Local Binary Pattern (LBP) are two typical spatial feature extraction methods. Gabor features are robust to illumination changes in images, and LBP can make full use of local spatial dependencies in images.
  • In recent years, researchers inspired by the Gabor and LBP feature extraction methods have proposed the Gabor Surface Feature (GSF) extraction method. Specifically, Gabor magnitude features are first extracted from a two-dimensional image, and the image is then characterized by the first- and second-order derivatives of these Gabor magnitude features.
  • For hyperspectral remote sensing images, however, the GSF extraction method has two main shortcomings. First, it only characterizes the spatial relationships of two-dimensional images, so the three-dimensional spatial-spectral structure of hyperspectral remote sensing images is not fully exploited. Second, a hyperspectral remote sensing image contains hundreds of bands; the multi-scale analysis and histogram feature representation of GSF greatly inflate the feature dimension, which reduces classification performance and increases time and space complexity.
  • the embodiment of the present application provides a feature extraction method and device for hyperspectral remote sensing images, which can fully utilize the three-dimensional spatial spectrum structure of hyperspectral remote sensing images, improve classification performance, and reduce time and space complexity.
  • In a first aspect, an embodiment of the present application provides a feature extraction method for a hyperspectral remote sensing image, where the method includes: normalizing an original image H to obtain a normalized image R; calculating a gradient of the normalized image R according to a preset gradient template; encoding according to the gradient of the normalized image R to obtain a coded feature map; and performing histogram feature extraction on the coded feature map to obtain three-dimensional surface features (3DSF).
  • the embodiment of the present application provides a feature extraction device for a hyperspectral remote sensing image, where the device includes:
  • a normalization processing unit for normalizing the original image H to obtain a normalized image R;
  • a calculating unit configured to calculate a gradient of the normalized image R according to a preset gradient template
  • a coding unit configured to perform coding according to the gradient of the normalized image R to obtain a coded feature map
  • a feature extraction unit configured to perform histogram feature extraction on the encoded feature map to obtain a three-dimensional surface feature 3DSF.
  • It can be seen that, in the solutions of the embodiments of the present application, the feature extraction device first normalizes the original image to obtain a normalized image; secondly, it calculates the gradient values of the normalized image according to a preset gradient template; then it binarizes the normalized image and its gradients to obtain binarized data; finally, it encodes the binarized data and obtains the 3DSF feature from the encoding result.
  • FIG. 1 is a schematic flowchart of a method for extracting features of a hyperspectral remote sensing image according to an embodiment of the present application
  • FIG. 2 is a schematic flow chart of another feature extraction method for hyperspectral remote sensing images according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of a three-dimensional coordinate system;
  • FIG. 4a is a schematic diagram of the pixels of a band image;
  • FIG. 4b is a schematic diagram of the pixels of the processed band image;
  • FIG. 4c is a schematic diagram of the pixel gradient values of the band image;
  • FIG. 5 is a schematic structural diagram of a feature extraction apparatus for a hyperspectral remote sensing image according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of another feature extraction apparatus for hyperspectral remote sensing images according to an embodiment of the present application.
  • the embodiment of the present application provides a method and a device for extracting features of a hyperspectral remote sensing image, which can fully utilize the three-dimensional spatial spectrum structure of the hyperspectral remote sensing image, improve the classification performance, and reduce the complexity of time and space.
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of a method for extracting features of a hyperspectral remote sensing image according to an embodiment of the present application.
  • the method for extracting features of a hyperspectral remote sensing image provided by an embodiment of the present application includes the following steps:
  • Normalizing the original image H to obtain the normalized image R includes: obtaining the mean and variance of the pixels of the original image H; and normalizing the original image H according to the mean and variance of its pixels to obtain the normalized image R.
  • the feature extraction device divides the original image H into M band images according to different bands in the spectral dimension.
  • the feature extraction device calculates the mean and variance of the pixels of each of the M band images.
  • the feature extraction device normalizes each band image according to a preset formula to obtain a processed image.
  • The above preset formula is:
    R_b = (H_b - mean(H_b)) / std(H_b)
  • where H_b is any one of the M band images, mean(H_b) is the mean of H_b, std(H_b) is the standard deviation of H_b, and R_b is the image obtained by normalizing H_b.
  • Optionally, the normalization may instead be a maximum-minimum normalization or a median normalization.
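  • For illustration only (not part of the patent disclosure), a minimal sketch of this band-wise normalization is given below, assuming the hyperspectral cube is held in a NumPy array H of shape (X, Y, B); the function name and the eps guard against zero-variance bands are assumptions added for the sketch.

```python
import numpy as np

def normalize_bands(H: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Normalize each spectral band of H (X x Y x B) to zero mean and unit standard deviation.

    Mirrors R_b = (H_b - mean(H_b)) / std(H_b) applied band by band.
    """
    H = H.astype(np.float64)
    mean = H.mean(axis=(0, 1), keepdims=True)  # per-band mean
    std = H.std(axis=(0, 1), keepdims=True)    # per-band standard deviation
    return (H - mean) / (std + eps)            # eps avoids division by zero (added assumption)
```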
  • the gradient of the normalized image R includes gradients R x and R y of the normalized image R in the spatial dimension and a gradient R b in the spectral dimension.
  • the preset gradient template may be [-1, 0, 1].
  • The feature extraction device calculating the gradient of the normalized image R according to the preset gradient template means calculating the gradient values of the normalized image R in the spatial dimensions and the spectral dimension.
  • As shown in FIG. 3, these three directions can be regarded as the three axes of a spatial coordinate system.
  • The two directions of the spatial dimension are denoted X and Y, and the direction of the spectral dimension is denoted B.
  • The process by which the feature extraction device calculates, for the image R, the gradient values in the three directions X, Y and B according to the preset gradient template [-1, 0, 1] is as follows:
    R_x(x, y, b) = R(x+1, y, b) - R(x-1, y, b)
    R_y(x, y, b) = R(x, y+1, b) - R(x, y-1, b)
    R_b(x, y, b) = R(x, y, b+1) - R(x, y, b-1)
  • where R_x(x, y, b) is the gradient value in the X direction of the pixel with coordinates (x, y, b) in the normalized image R, R(x+1, y, b) is the pixel of the normalized image R with coordinates (x+1, y, b), and R(x-1, y, b) is the pixel with coordinates (x-1, y, b); the Y-direction and B-direction gradients are defined analogously.
  • Following the above method, the feature extraction device obtains the gradients of the normalized image R in the three directions (the gradients in the spatial dimensions and the gradient in the spectral dimension). [R_x, R_y, R_b] can be used to represent the gradients of the normalized image R in the three directions,
  • where R_x and R_y represent the gradients of the normalized image R in the spatial dimensions,
  • and R_b is the gradient of the normalized image R in the spectral dimension.
  • It should be noted that, before the pixel gradients are calculated, the pixel values of positions that do not belong to the normalized image R (i.e., positions outside its boundary) are set to 0.
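  • A possible NumPy sketch of this gradient step is shown below (illustrative only): the image is zero-padded so that positions outside R contribute 0, and the [-1, 0, 1] template becomes a central difference along each axis; the function name and return convention are assumptions.

```python
import numpy as np

def gradients_3d(R: np.ndarray):
    """Apply the template [-1, 0, 1] along X, Y and B with zero padding outside R.

    Returns (Rx, Ry, Rb), each with the same shape as R (X x Y x B).
    """
    P = np.pad(R, 1, mode="constant", constant_values=0.0)  # pixels outside R set to 0
    Rx = P[2:, 1:-1, 1:-1] - P[:-2, 1:-1, 1:-1]  # R(x+1, y, b) - R(x-1, y, b)
    Ry = P[1:-1, 2:, 1:-1] - P[1:-1, :-2, 1:-1]  # R(x, y+1, b) - R(x, y-1, b)
    Rb = P[1:-1, 1:-1, 2:] - P[1:-1, 1:-1, :-2]  # R(x, y, b+1) - R(x, y, b-1)
    return Rx, Ry, Rb
```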
  • Encoding according to the gradient of the normalized image R to obtain the coded feature map includes:
  • calculating the corresponding binarized data S, S_x, S_y and S_b from the normalized image R, its gradients R_x and R_y in the spatial dimensions, and its gradient R_b in the spectral dimension;
  • calculating the coded feature map from S, S_x, S_y, S_b according to a preset formula.
  • Specifically, S is the image obtained by binarizing the normalized image R, S_x and S_y are the images obtained by binarizing the spatial-dimension gradients R_x and R_y of the normalized image R, and S_b is the image obtained by binarizing the spectral-dimension gradient R_b of the normalized image.
  • Illustratively, the feature extraction device binarizes the normalized image R by the following formula:
    S_xyb = 1 if R_xyb > mean(R), and S_xyb = 0 if R_xyb <= mean(R)
  • where mean(R) is the mean of the pixel values of the normalized image R, R_xyb is the pixel value of the pixel of the normalized image R whose coordinates on the spatial coordinate axes shown in FIG. 3 are (x, y, b), and S_xyb is the binarized value of R_xyb.
  • Similarly, the feature extraction device binarizes R_x by the following formula:
    (S_x)_xyb = 1 if (R_x)_xyb > mean(R_x), and (S_x)_xyb = 0 if (R_x)_xyb <= mean(R_x)
  • where mean(R_x) is the mean of the pixel values of R_x, (R_x)_xyb is the pixel value of the pixel of R_x with coordinates (x, y, b), and (S_x)_xyb is its binarized value.
  • Similarly, the feature extraction device binarizes R_y by the following formula:
    (S_y)_xyb = 1 if (R_y)_xyb > mean(R_y), and (S_y)_xyb = 0 if (R_y)_xyb <= mean(R_y)
  • where mean(R_y) is the mean of the pixel values of R_y, (R_y)_xyb is the pixel value of the pixel of R_y with coordinates (x, y, b), and (S_y)_xyb is its binarized value.
  • Similarly, the feature extraction device binarizes the gradient R_b by the following formula:
    (S_b)_xyb = 1 if (R_b)_xyb > mean(R_b), and (S_b)_xyb = 0 if (R_b)_xyb <= mean(R_b)
  • where mean(R_b) is the mean of the pixel values of R_b, (R_b)_xyb is the pixel value of the pixel of R_b with coordinates (x, y, b), and (S_b)_xyb is its binarized value.
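  • The four binarizations above all follow the same pattern (compare each array against its own global mean); a minimal sketch, with the helper name chosen only for this example, could be:

```python
import numpy as np

def binarize(A: np.ndarray) -> np.ndarray:
    """Return a 0/1 array: 1 where A exceeds its global mean, 0 otherwise."""
    return (A > A.mean()).astype(np.uint8)

# S  = binarize(R);  Sx = binarize(Rx);  Sy = binarize(Ry);  Sb = binarize(Rb)
```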
  • the coded value of the above-mentioned coded feature map Z represents the spatial spectrum structure characteristic around the pixel.
  • The fusion rule is that the feature extraction device assigns weights to S, S_x, S_y and S_b according to the saliency of the features.
  • Specifically, the weight of S is a first weight A1, the weight of S_x is a second weight A2, the weight of S_y is a third weight A3, and the weight of S_b is a fourth weight A4. Because the original image (i.e., the original hyperspectral remote sensing image) contains rich distribution information about the structure of ground objects, and the spatial characteristics of that distribution are more pronounced than the spectral characteristics, A1 > A2 > A3 > A4.
  • The joint (coded) feature map is Z = A1*S + A2*S_x + A3*S_y + A4*S_b. For example, if the weights of S, S_x, S_y and S_b are set to 2^3, 2^2, 2^1 and 2^0 respectively, the encoding is Z = 2^3*S + 2^2*S_x + 2^1*S_y + 2^0*S_b.
  • The coded feature map Z thus fuses the original hyperspectral remote sensing image (i.e., the image R described above) with the first-order gradient magnitudes (i.e., S_x, S_y and S_b described above).
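  • With the example weights 2^3, 2^2, 2^1 and 2^0, the joint encoding can be sketched as follows (illustrative only; any weights satisfying A1 > A2 > A3 > A4 that keep the 16 codes distinct would fit the rule above):

```python
import numpy as np

def encode(S: np.ndarray, Sx: np.ndarray, Sy: np.ndarray, Sb: np.ndarray) -> np.ndarray:
    """Fuse the four binary maps into the coded feature map Z with values 0..15."""
    return (8 * S + 4 * Sx + 2 * Sy + 1 * Sb).astype(np.uint8)
```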
  • the above histogram feature is statistically obtained from a cubic neighborhood V around each pixel.
  • Since the values of S, S_x, S_y and S_b are all either 0 or 1, the coded feature map Z can take 16 different values, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 and 15, so the histogram feature computed around each pixel can be 16-dimensional.
  • Assuming F_xyb denotes the histogram feature of the pixel with coordinates (x, y, b), the statistical formula is:
    F_xyb(i) = Σ_{j=-⌊V_x/2⌋}^{⌊V_x/2⌋} Σ_{k=-⌊V_y/2⌋}^{⌊V_y/2⌋} Σ_{l=-⌊V_b/2⌋}^{⌊V_b/2⌋} h(i),  i = 0, 1, ..., 15
  • where V_x, V_y and V_b respectively denote the spatial and spectral dimensions of the cubic neighborhood, the ⌊·⌋ operator rounds the data down, and h(i) denotes the number of times the 3DSF code i appears in the cubic neighborhood: h(i) equals 1 when Z(x+j, y+k, b+l) = i and equals 0 otherwise.
  • Further, the feature extraction device obtains the number of times each code appears in the cubic neighborhood, yielding 16 counts (i.e., h(0), h(1), h(2), ..., h(14), h(15)).
  • The feature extraction device combines these 16 counts into an array, and this array can be regarded as the 3DSF feature of the pixel.
  • In other words, the histogram feature around each pixel is 16-dimensional, and the original image H has size X*Y*B, so the result can be seen as 16 cubes of size X*Y*B stacked together; it can therefore be expressed as X*Y*(16*B).
  • For the original image H, the resulting 3DSF feature is F, of size X*Y*(16*B); this feature F can be used directly for subsequent pixel-level classification.
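  • A sketch of the histogram step is given below (illustrative only): for each pixel it counts how often each of the 16 codes occurs in a V_x x V_y x V_b neighborhood; the default neighborhood size and the clipping of the neighborhood at the image border are assumptions added for the sketch.

```python
import numpy as np

def histogram_features(Z: np.ndarray, vx: int = 5, vy: int = 5, vb: int = 5) -> np.ndarray:
    """Return F of shape (X, Y, 16*B): a 16-bin code histogram for every pixel of Z."""
    X, Y, B = Z.shape
    rx, ry, rb = vx // 2, vy // 2, vb // 2
    F = np.zeros((X, Y, B, 16), dtype=np.int32)
    for x in range(X):
        for y in range(Y):
            for b in range(B):
                cube = Z[max(0, x - rx):x + rx + 1,
                         max(0, y - ry):y + ry + 1,
                         max(0, b - rb):b + rb + 1]      # cubic neighborhood, clipped at the borders
                F[x, y, b] = np.bincount(cube.ravel(), minlength=16)[:16]
    return F.reshape(X, Y, 16 * B)                       # X x Y x (16*B)
```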
  • It can be seen that, in the solution of this embodiment, the feature extraction device first normalizes the original image to obtain a normalized image; secondly, it calculates the gradient values of the normalized image according to the preset gradient template; then it binarizes the normalized image and its gradients to obtain binarized data; finally, it encodes the binarized data and obtains the 3DSF feature from the encoding result. In this way, the three-dimensional spatial-spectral structure of the hyperspectral remote sensing image can be fully utilized, classification performance is improved, and time and space complexity is reduced.
  • FIG. 2 is a schematic flowchart diagram of another feature extraction method for hyperspectral remote sensing images according to an embodiment of the present application. As shown in Figure 2, the method includes:
  • the feature extraction device normalizes the image of each band of the original image H to obtain a normalized image R.
  • The original image H ∈ ℝ^(X×Y×B), where ℝ denotes the real numbers, and X, Y and B respectively represent the sizes of the two spatial dimensions and the spectral dimension of the original image.
  • the feature extraction device divides the original image H into M band images according to different bands in the spectral dimension.
  • the feature extraction device calculates the mean and variance of the pixels of each of the M band images.
  • the feature extraction device normalizes each band image according to a preset formula to obtain a processed image.
  • The preset formula is:
    R_b = (H_b - mean(H_b)) / std(H_b)
  • where H_b is any one of the M band images, mean(H_b) is the mean of H_b, std(H_b) is the standard deviation of H_b, and R_b is the image obtained by normalizing H_b.
  • Optionally, the normalization may instead be a maximum-minimum normalization or a median normalization.
  • the feature extraction device calculates a gradient value of the normalized image R according to the preset gradient template.
  • the preset gradient template may be [-1, 0, 1].
  • The feature extraction device calculating the gradient value of the normalized image R according to the preset gradient template means calculating the gradient values of the normalized image R in the spatial dimensions and the spectral dimension.
  • As shown in FIG. 3, these three directions can be regarded as the three axes of a spatial coordinate system.
  • The two directions of the spatial dimension are denoted X and Y, and the direction of the spectral dimension is denoted B.
  • The process by which the feature extraction device calculates, for the image R, the gradient values in the three directions X, Y and B according to the preset gradient template [-1, 0, 1] is as follows:
    R_x(x, y, b) = R(x+1, y, b) - R(x-1, y, b)
    R_y(x, y, b) = R(x, y+1, b) - R(x, y-1, b)
    R_b(x, y, b) = R(x, y, b+1) - R(x, y, b-1)
  • where R_x(x, y, b) is the gradient value in the X direction of the pixel with coordinates (x, y, b) in the normalized image R, and R(x+1, y, b) and R(x-1, y, b) are the pixels of the normalized image R with coordinates (x+1, y, b) and (x-1, y, b);
  • R_y(x, y, b) is the gradient value in the Y direction of the pixel with coordinates (x, y, b), and R(x, y+1, b) and R(x, y-1, b) are the pixels of the normalized image R with coordinates (x, y+1, b) and (x, y-1, b);
  • R_b(x, y, b) is the gradient value in the B direction of the pixel with coordinates (x, y, b), and R(x, y, b+1) and R(x, y, b-1) are the pixels of the normalized image R with coordinates (x, y, b+1) and (x, y, b-1).
  • Following the above method, the feature extraction device obtains the gradients of the normalized image R in the three directions (the gradients in the spatial dimensions and the gradient in the spectral dimension). [R_x, R_y, R_b] can be used to represent the gradients of the normalized image R in the three directions,
  • where R_x and R_y represent the gradients of the normalized image R in the spatial dimensions,
  • and R_b is the gradient of the normalized image R in the spectral dimension.
  • It should be noted that, before the pixel gradients are calculated, the pixel values of positions that do not belong to the normalized image R (i.e., positions outside its boundary) are set to 0.
  • For example, for the band image shown in FIG. 4a, before calculating the gradient values of the pixels of the band image, the feature extraction device sets the pixel values of positions outside the band image to 0, as shown in FIG. 4b. Then, based on the gradient template [-1, 0, 1], the gradient value of each pixel is calculated.
  • In this way, the feature extraction device obtains the pixel gradients of the band image, as shown in FIG. 4c.
  • Here, FIG. 4a is a schematic diagram of the pixels of a band image,
  • FIG. 4b is a schematic diagram of the pixels of the processed band image,
  • and FIG. 4c is a schematic diagram of the pixel gradient values of the band image.
  • the feature extraction device performs binarization processing on the normalized image and the gradient to obtain binarized data.
  • the feature extraction device binarizes the normalized image R and the gradients R x , R y and R b of the image in three directions to obtain a binarized image S, S x , S y and S b .
  • where S is the image obtained by binarizing the normalized image R, S_x and S_y are the images obtained by binarizing the gradients R_x and R_y of the normalized image R in the spatial dimensions,
  • and S_b is the image obtained by binarizing the gradient R_b of the normalized image in the spectral dimension.
  • Illustratively, the feature extraction device binarizes the normalized image R by the following formula:
    S_xyb = 1 if R_xyb > mean(R), and S_xyb = 0 if R_xyb <= mean(R)
  • where mean(R) is the mean of the pixel values of the normalized image R, R_xyb is the pixel value of the pixel of the normalized image R whose coordinates on the spatial coordinate axes shown in FIG. 3 are (x, y, b), and S_xyb is the binarized value of R_xyb.
  • Similarly, the feature extraction device binarizes R_x by the following formula:
    (S_x)_xyb = 1 if (R_x)_xyb > mean(R_x), and (S_x)_xyb = 0 if (R_x)_xyb <= mean(R_x)
  • where mean(R_x) is the mean of the pixel values of R_x, (R_x)_xyb is the pixel value of the pixel of R_x with coordinates (x, y, b), and (S_x)_xyb is its binarized value.
  • Similarly, the feature extraction device binarizes R_y by the following formula:
    (S_y)_xyb = 1 if (R_y)_xyb > mean(R_y), and (S_y)_xyb = 0 if (R_y)_xyb <= mean(R_y)
  • where mean(R_y) is the mean of the pixel values of R_y, (R_y)_xyb is the pixel value of the pixel of R_y with coordinates (x, y, b), and (S_y)_xyb is its binarized value.
  • Similarly, the feature extraction device binarizes the gradient R_b by the following formula:
    (S_b)_xyb = 1 if (R_b)_xyb > mean(R_b), and (S_b)_xyb = 0 if (R_b)_xyb <= mean(R_b)
  • where mean(R_b) is the mean of the pixel values of R_b, (R_b)_xyb is the pixel value of the pixel of R_b with coordinates (x, y, b), and (S_b)_xyb is its binarized value.
  • the feature extraction device jointly encodes S, S x , S y , and S b according to a preset fusion rule to obtain a coded feature map.
  • the coded value of the above-mentioned coded feature map Z represents the spatial spectrum structure characteristic around the pixel.
  • The fusion rule is that the feature extraction device assigns weights to S, S_x, S_y and S_b according to the saliency of the features.
  • Specifically, the weight of S is a first weight A1, the weight of S_x is a second weight A2, the weight of S_y is a third weight A3, and the weight of S_b is a fourth weight A4. Because the original image (i.e., the original hyperspectral remote sensing image) contains rich distribution information about the structure of ground objects, and the spatial characteristics of that distribution are more pronounced than the spectral characteristics, A1 > A2 > A3 > A4.
  • The joint (coded) feature map is Z = A1*S + A2*S_x + A3*S_y + A4*S_b. For example, if the weights of S, S_x, S_y and S_b are set to 2^3, 2^2, 2^1 and 2^0 respectively, the encoding is Z = 2^3*S + 2^2*S_x + 2^1*S_y + 2^0*S_b.
  • The coded feature map Z thus fuses the original hyperspectral remote sensing image (i.e., the image R described above) with the first-order gradient magnitudes (i.e., S_x, S_y and S_b described above).
  • the feature extraction device performs histogram feature extraction on the encoded feature map to obtain a 3DSF feature.
  • the above histogram feature is statistically obtained from a cubic neighborhood V around each pixel.
  • Since the values of S, S_x, S_y and S_b are all either 0 or 1, the coded feature map Z can take 16 different values, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 and 15, so the histogram feature computed around each pixel can be 16-dimensional.
  • Assuming F_xyb denotes the histogram feature of the pixel with coordinates (x, y, b), the statistical formula is:
    F_xyb(i) = Σ_{j=-⌊V_x/2⌋}^{⌊V_x/2⌋} Σ_{k=-⌊V_y/2⌋}^{⌊V_y/2⌋} Σ_{l=-⌊V_b/2⌋}^{⌊V_b/2⌋} h(i),  i = 0, 1, ..., 15
  • where V_x, V_y and V_b respectively denote the spatial and spectral dimensions of the cubic neighborhood, the ⌊·⌋ operator rounds the data down, and h(i) denotes the number of times the 3DSF code i appears in the cubic neighborhood: h(i) equals 1 when Z(x+j, y+k, b+l) = i and equals 0 otherwise.
  • Further, the feature extraction device obtains the number of times each code appears in the cubic neighborhood, yielding 16 counts (i.e., h(0), h(1), h(2), ..., h(14), h(15)).
  • The feature extraction device combines these 16 counts into an array, and this array can be regarded as the 3DSF feature of the pixel.
  • In other words, the histogram feature around each pixel is 16-dimensional, and the original image H has size X*Y*B, so the result can be seen as 16 cubes of size X*Y*B stacked together; it can therefore be expressed as X*Y*(16*B).
  • For the original image H, the resulting 3DSF feature is F, of size X*Y*(16*B); this feature F can be used directly for subsequent pixel-level classification.
  • It can be seen that, in the solution of this embodiment, the feature extraction device first normalizes the original image to obtain a normalized image; secondly, it calculates the gradient values of the normalized image according to the preset gradient template; then it binarizes the normalized image and its gradients to obtain binarized data; finally, it encodes the binarized data and obtains the 3DSF feature from the encoding result.
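  • Putting the steps of this embodiment together, an end-to-end sketch of the 3DSF pipeline might look as follows (illustrative only; the power-of-two weights, the default neighborhood size and the function name are assumptions consistent with the examples above, and the plain triple loop is written for clarity rather than speed):

```python
import numpy as np

def extract_3dsf(H: np.ndarray, vx: int = 5, vy: int = 5, vb: int = 5) -> np.ndarray:
    """3DSF sketch: normalize -> 3-D gradients -> binarize -> encode -> per-pixel histogram."""
    H = H.astype(np.float64)
    R = (H - H.mean(axis=(0, 1), keepdims=True)) / (H.std(axis=(0, 1), keepdims=True) + 1e-12)

    P = np.pad(R, 1)                                      # zero padding outside the image
    Rx = P[2:, 1:-1, 1:-1] - P[:-2, 1:-1, 1:-1]
    Ry = P[1:-1, 2:, 1:-1] - P[1:-1, :-2, 1:-1]
    Rb = P[1:-1, 1:-1, 2:] - P[1:-1, 1:-1, :-2]

    def bit(A):                                           # global-mean binarization
        return (A > A.mean()).astype(np.uint8)

    Z = 8 * bit(R) + 4 * bit(Rx) + 2 * bit(Ry) + 1 * bit(Rb)   # codes 0..15

    X, Y, B = Z.shape
    rx, ry, rb = vx // 2, vy // 2, vb // 2
    F = np.zeros((X, Y, B, 16), dtype=np.int32)
    for x in range(X):
        for y in range(Y):
            for b in range(B):
                cube = Z[max(0, x - rx):x + rx + 1,
                         max(0, y - ry):y + ry + 1,
                         max(0, b - rb):b + rb + 1]
                F[x, y, b] = np.bincount(cube.ravel(), minlength=16)[:16]
    return F.reshape(X, Y, 16 * B)                        # usable directly for pixel-level classification

# Example usage on a random cube:
# F = extract_3dsf(np.random.rand(50, 50, 30))   # F.shape == (50, 50, 480)
```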
  • FIG. 5 is a schematic structural diagram of a device for extracting features of a hyperspectral remote sensing image according to an embodiment of the present application.
  • the apparatus 500 includes:
  • the normalization processing unit 501 is configured to perform normalization processing on the original image H to obtain a normalized image R.
  • the normalization processing unit 501 includes
  • the obtaining subunit 5011 is configured to obtain the mean and variance of the pixels of the original image H;
  • the processing sub-unit 5012 is configured to perform normalization processing on the original image H according to the mean and variance of the pixels of the original image H to obtain a normalized image R.
  • the calculating unit 502 is configured to calculate a gradient of the normalized image R according to the preset gradient template.
  • the gradient of the normalized image R includes gradients R x and R y of the normalized image R in the spatial dimension and a gradient R b in the spectral dimension.
  • the coding unit 503 is configured to perform coding according to the gradient of the normalized image R to obtain a coded feature map.
  • the coding unit 503 includes:
  • a first calculating subunit 5031, configured to calculate the corresponding binarized data S, S_x, S_y and S_b from the normalized image R, its gradients R_x and R_y in the spatial dimensions, and its gradient R_b in the spectral dimension;
  • a second calculating subunit 5032, configured to calculate the coded feature map from S, S_x, S_y, S_b and a preset formula, where the preset formula is:
  • coded feature map Z = 2^3*S + 2^2*S_x + 2^1*S_y + 2^0*S_b,
  • where 2^3, 2^2, 2^1 and 2^0 are the weights of S, S_x, S_y and S_b, respectively.
  • the feature extraction unit 504 is configured to perform histogram feature extraction on the encoded feature map to obtain a three-dimensional surface feature 3DSF.
  • The feature extraction unit 504 is configured to compute, for each pixel, the histogram feature
    F_xyb(i) = Σ_{j=-⌊V_x/2⌋}^{⌊V_x/2⌋} Σ_{k=-⌊V_y/2⌋}^{⌊V_y/2⌋} Σ_{l=-⌊V_b/2⌋}^{⌊V_b/2⌋} h(i),  i = 0, 1, ..., 15,
  • where F_xyb denotes the histogram feature of the pixel with coordinates (x, y, b), and V_x, V_y, V_b respectively denote the spatial and spectral dimensions of the cubic neighborhood;
  • the ⌊·⌋ operator rounds the data down, h(i) denotes the number of times the coded value i appears in the cubic neighborhood, and the array consisting of these counts is the 3DSF.
  • the hyperspectral remote sensing image feature extraction device 500 is presented in the form of a unit (normalization processing unit 501, calculation unit 502, encoding unit 503, and feature extraction unit 504).
  • a "unit" herein may refer to an application-specific integrated circuit (ASIC), a processor and memory that executes one or more software or firmware programs, integrated logic circuits, and/or other devices that provide the functionality described above.
  • FIG. 6 is a schematic structural diagram of a hyperspectral remote sensing image feature extraction apparatus according to an embodiment of the present application, which is used to implement the feature extraction method of hyperspectral remote sensing image disclosed in the embodiment of the present application.
  • the hyperspectral remote sensing image feature extraction device 600 may include at least one bus 601, at least one processor 602 connected to the bus 601, and at least one memory 603 connected to the bus 601.
  • the processor 602 calls the code stored in the memory through the bus 601 for normalizing the original image H to obtain a normalized image R; and calculating the normalized image R according to the preset gradient template. a gradient; encoding according to the gradient of the normalized image R to obtain a coded feature map; performing histogram feature extraction on the coded feature map to obtain a three-dimensional surface feature 3DSF.
  • the hyperspectral remote sensing image feature extraction device 600 is presented in the form of a unit.
  • the "unit" herein may refer to an application-specific integrated circuit (ASIC), a processor and memory that executes one or more software or firmware programs, integrated logic circuits, and/or other devices that provide the functions described above.
  • The embodiment of the present application further provides a computer storage medium, wherein the computer storage medium may store a program that, when executed, performs some or all of the steps of the hyperspectral remote sensing image feature extraction method described in the foregoing method embodiments.
  • the disclosed apparatus may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division into units is only a division by logical function; in actual implementation there may be other ways of dividing them. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The computer readable storage medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • The foregoing storage medium includes a USB flash drive (U disk), a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

A feature extraction method for a hyperspectral remotely sensed image, comprising: performing normalization processing on an original image H, to obtain a normalized image R (S101); calculating, according to a preset gradient template, the gradient of the normalized image R (S102); encoding according to the gradient of the normalized image R, to obtain an encoded feature graph (S103); and performing a histogram feature extraction on the encoded feature graph, to obtain a three dimensional surface feature (3DSF) (S104). Further provided is a feature extraction device for a hyperspectral remotely sensed image. The above-mentioned manner can make full use of the three dimensional space-spectral structure of a hyperspectral remotely sensed image, improves classification performance, and reduces time complexity and space complexity.

Description

Feature extraction method and device for hyperspectral remote sensing image

This application claims priority to Chinese patent application No. 201710800249.4, entitled "Feature Extraction Method and Apparatus for Hyperspectral Remote Sensing Images", filed with the State Intellectual Property Office on September 7, 2017, the entire contents of which are incorporated herein by reference.

Technical field

The present application relates to the field of image processing, and in particular to a method and apparatus for classifying hyperspectral remote sensing images.
Background

A hyperspectral remote sensing image refers to hyperspectral image data acquired by a sensor imaging in different wavelength bands within the visible, near-infrared, mid-infrared and thermal-infrared regions of the electromagnetic spectrum. Hyperspectral remote sensing images therefore contain rich spatial, radiometric and spectral information, which makes the fine classification and identification of surface materials possible.

A hyperspectral remote sensing image formed over hundreds of bands carries rich radiometric, spatial and spectral information about ground objects, which makes their identification and classification more effective. To classify ground objects from hyperspectral remote sensing images, features must first be extracted. Existing feature extraction methods mainly fall into two categories: spatial feature extraction and spatial-spectral feature extraction.

Spatial feature extraction mainly uses the spatial information of different bands to represent a hyperspectral remote sensing image: it first extracts the spatial features of each band and then stacks the spatial features of the different bands. Gabor features and the Local Binary Pattern (LBP) are two typical spatial feature extraction methods. Gabor features are robust to illumination changes in images, and LBP can make full use of local spatial dependencies in images. In recent years, researchers inspired by the Gabor and LBP feature extraction methods have proposed the Gabor Surface Feature (GSF) extraction method. Specifically, Gabor magnitude features are first extracted from a two-dimensional image, and the image is then characterized by the first- and second-order derivatives of these Gabor magnitude features. The advantage of GSF is that the features obtained with the Gabor filters are not affected by changes in the illumination conditions of the image. However, for hyperspectral remote sensing images, the GSF extraction method has two main shortcomings. First, it only characterizes the spatial relationships of two-dimensional images, so the three-dimensional spatial-spectral structure of hyperspectral remote sensing images is not fully exploited. Second, a hyperspectral remote sensing image contains hundreds of bands; the multi-scale analysis and histogram feature representation of GSF greatly inflate the feature dimension, which reduces classification performance and increases time and space complexity.
Summary of the invention

The embodiments of the present application provide a feature extraction method and device for hyperspectral remote sensing images, which can fully utilize the three-dimensional spatial-spectral structure of hyperspectral remote sensing images, improve classification performance, and reduce time and space complexity.

In a first aspect, an embodiment of the present application provides a feature extraction method for a hyperspectral remote sensing image, the method including:

normalizing an original image H to obtain a normalized image R;

calculating a gradient of the normalized image R according to a preset gradient template;

encoding according to the gradient of the normalized image R to obtain a coded feature map;

performing histogram feature extraction on the coded feature map to obtain three-dimensional surface features (3DSF).

In a second aspect, an embodiment of the present application provides a feature extraction device for a hyperspectral remote sensing image, the device including:

a normalization processing unit, configured to normalize the original image H to obtain a normalized image R;

a calculating unit, configured to calculate a gradient of the normalized image R according to a preset gradient template;

a coding unit, configured to encode according to the gradient of the normalized image R to obtain a coded feature map;

a feature extraction unit, configured to perform histogram feature extraction on the coded feature map to obtain the three-dimensional surface features 3DSF.

It can be seen that, in the solutions of the embodiments of the present application, the feature extraction device first normalizes the original image to obtain a normalized image; secondly, it calculates the gradient values of the normalized image according to a preset gradient template; then it binarizes the normalized image and its gradients to obtain binarized data; finally, it encodes the binarized data and obtains the 3DSF feature from the encoding result. In this way, the method can fully utilize the three-dimensional spatial-spectral structure of the hyperspectral remote sensing image, improves classification performance, and reduces time and space complexity.
Description of the drawings

In order to explain the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by a person of ordinary skill in the art without creative effort.

FIG. 1 is a schematic flowchart of a feature extraction method for hyperspectral remote sensing images according to an embodiment of the present application;

FIG. 2 is a schematic flowchart of another feature extraction method for hyperspectral remote sensing images according to an embodiment of the present application;

FIG. 3 is a schematic diagram of a three-dimensional coordinate system;

FIG. 4a is a schematic diagram of the pixels of a band image;

FIG. 4b is a schematic diagram of the pixels of the processed band image;

FIG. 4c is a schematic diagram of the pixel gradient values of the band image;

FIG. 5 is a schematic structural diagram of a feature extraction device for hyperspectral remote sensing images according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of another feature extraction device for hyperspectral remote sensing images according to an embodiment of the present application.
Detailed description of the embodiments

The embodiments of the present application provide a feature extraction method and device for hyperspectral remote sensing images, which can fully utilize the three-dimensional spatial-spectral structure of hyperspectral remote sensing images, improve classification performance, and reduce time and space complexity.

In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.

The terms "first", "second" and "third" in the specification, the claims and the above drawings of the present application are used to distinguish different objects, not to describe a specific order. In addition, the term "include" and any variants thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units not listed, or optionally also includes other steps or units inherent to the process, method, product or device.

A feature extraction method for hyperspectral remote sensing images provided by an embodiment of the present application includes: normalizing an original image H to obtain a normalized image R; calculating a gradient of the normalized image R according to a preset gradient template; encoding according to the gradient of the normalized image R to obtain a coded feature map; and performing histogram feature extraction on the coded feature map to obtain the 3DSF.

Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of a feature extraction method for hyperspectral remote sensing images according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps.

S101. Normalize the original image H to obtain the normalized image R.

Normalizing the original image H to obtain the normalized image R includes:

obtaining the mean and variance of the pixels of the original image H;

normalizing the original image H according to the mean and variance of the pixels of the original image H to obtain the normalized image R.
Specifically, in the spectral dimension, the feature extraction device divides the original image H into M band images according to the different bands. The feature extraction device calculates the mean and variance of the pixels of each of the M band images. Then, the feature extraction device normalizes each band image according to a preset formula to obtain the processed image. The preset formula is:

R_b = (H_b - mean(H_b)) / std(H_b)

where H_b is any one of the M band images, mean(H_b) is the mean of H_b, std(H_b) is the standard deviation of H_b, and R_b is the image obtained by normalizing H_b.

Further, after each band image of the original image H has been normalized, the normalized image R is obtained, with R composed of the normalized band images R_1, R_2, ..., R_M.

Optionally, the normalization may instead be a maximum-minimum normalization or a median normalization.
S102. Calculate the gradient of the normalized image R according to the preset gradient template.

The gradient of the normalized image R includes the gradients R_x and R_y of the normalized image R in the spatial dimensions and the gradient R_b in the spectral dimension.

Optionally, the preset gradient template may be [-1, 0, 1].

The feature extraction device calculating the gradient values of the normalized image R according to the preset gradient template means calculating the gradient values of the normalized image R in the spatial dimensions and the spectral dimension. As shown in FIG. 3, these three directions can be regarded as the three axes of a spatial coordinate system; the two directions of the spatial dimension are denoted X and Y, and the direction of the spectral dimension is denoted B.

Specifically, the process by which the feature extraction device calculates, for the image R, the gradient values in the three directions X, Y and B according to the preset gradient template [-1, 0, 1] is as follows:

R_x(x, y, b) = R(x+1, y, b) - R(x-1, y, b)

R_y(x, y, b) = R(x, y+1, b) - R(x, y-1, b)

R_b(x, y, b) = R(x, y, b+1) - R(x, y, b-1)

where R_x(x, y, b) is the gradient value in the X direction of the pixel with coordinates (x, y, b) in the normalized image R, and R(x+1, y, b) and R(x-1, y, b) are the pixels of the normalized image R with coordinates (x+1, y, b) and (x-1, y, b); R_y(x, y, b) is the gradient value in the Y direction of the pixel with coordinates (x, y, b), and R(x, y+1, b) and R(x, y-1, b) are the pixels with coordinates (x, y+1, b) and (x, y-1, b); R_b(x, y, b) is the gradient value in the B direction of the pixel with coordinates (x, y, b), and R(x, y, b+1) and R(x, y, b-1) are the pixels with coordinates (x, y, b+1) and (x, y, b-1).

Following the above method, the feature extraction device obtains the gradients of the normalized image R in the three directions (the gradients in the spatial dimensions and the gradient in the spectral dimension). [R_x, R_y, R_b] can be used to represent the gradients of the normalized image R in the three directions, where R_x and R_y represent the gradients of the normalized image R in the spatial dimensions, and R_b is the gradient of the normalized image R in the spectral dimension.

It should be noted that, before the pixel gradients are calculated, the pixel values of positions that do not belong to the normalized image R (i.e., positions outside its boundary) are set to 0.
S103. Encode according to the gradient of the normalized image R to obtain the coded feature map.

Encoding according to the gradient of the normalized image R to obtain the coded feature map includes:

calculating the corresponding binarized data S, S_x, S_y and S_b from the normalized image R, its gradients R_x and R_y in the spatial dimensions, and its gradient R_b in the spectral dimension;

calculating the coded feature map from S, S_x, S_y, S_b according to a preset formula, where the preset formula is: coded feature map Z = 2^3*S + 2^2*S_x + 2^1*S_y + 2^0*S_b, and 2^3, 2^2, 2^1 and 2^0 are the weights of S, S_x, S_y and S_b, respectively.

Specifically, S is the image obtained by binarizing the normalized image R, S_x and S_y are the images obtained by binarizing the gradients R_x and R_y of the normalized image R in the spatial dimensions, and S_b is the image obtained by binarizing the gradient R_b of the normalized image in the spectral dimension.

Illustratively, the feature extraction device binarizes the normalized image R by the following formula:

S_xyb = 1 if R_xyb > mean(R), and S_xyb = 0 if R_xyb <= mean(R)

where mean(R) is the mean of the pixel values of the normalized image R, R_xyb is the pixel value of the pixel of the normalized image R whose coordinates on the spatial coordinate axes shown in FIG. 3 are (x, y, b), and S_xyb is the binarized value of R_xyb. When R_xyb is greater than mean(R), S_xyb equals 1; when R_xyb is less than or equal to mean(R), S_xyb equals 0.
Similarly, the feature extraction device binarizes R_x by the following formula:

(S_x)_xyb = 1 if (R_x)_xyb > mean(R_x), and (S_x)_xyb = 0 if (R_x)_xyb <= mean(R_x)

where mean(R_x) is the mean of the pixel values of R_x, (R_x)_xyb is the pixel value of the pixel of R_x with coordinates (x, y, b) on the spatial coordinate axes shown in FIG. 3, and (S_x)_xyb is the binarized value of (R_x)_xyb. When (R_x)_xyb is greater than mean(R_x), (S_x)_xyb equals 1; when (R_x)_xyb is less than or equal to mean(R_x), (S_x)_xyb equals 0.

Similarly, the feature extraction device binarizes R_y by the following formula:

(S_y)_xyb = 1 if (R_y)_xyb > mean(R_y), and (S_y)_xyb = 0 if (R_y)_xyb <= mean(R_y)

where mean(R_y) is the mean of the pixel values of R_y, (R_y)_xyb is the pixel value of the pixel of R_y with coordinates (x, y, b) on the spatial coordinate axes shown in FIG. 3, and (S_y)_xyb is the binarized value of (R_y)_xyb. When (R_y)_xyb is greater than mean(R_y), (S_y)_xyb equals 1; when (R_y)_xyb is less than or equal to mean(R_y), (S_y)_xyb equals 0.

Similarly, the feature extraction device binarizes the gradient R_b by the following formula:

(S_b)_xyb = 1 if (R_b)_xyb > mean(R_b), and (S_b)_xyb = 0 if (R_b)_xyb <= mean(R_b)

where mean(R_b) is the mean of the pixel values of R_b, (R_b)_xyb is the pixel value of the pixel of R_b with coordinates (x, y, b) on the spatial coordinate axes shown in FIG. 3, and (S_b)_xyb is the binarized value of (R_b)_xyb. When (R_b)_xyb is greater than mean(R_b), (S_b)_xyb equals 1; when (R_b)_xyb is less than or equal to mean(R_b), (S_b)_xyb equals 0.
其中,上述编码特征图谱Z的编码值表示像素周围的空谱结构特性。上述融合规则为上述特征提取装置根据特征的显著性为上述S,Sx,Sy和Sb分配权值。The coded value of the above-mentioned coded feature map Z represents the spatial spectrum structure characteristic around the pixel. The above fusion rule is that the feature extraction means assigns a weight to the above S, S x , S y and S b according to the saliency of the feature.
具体地,上述S的权值为第一权值A1,上述Sx的权值为第二权值A2,上述Sy的权值为第三权值A3,上述Sb的权值为第四权值A4,由于上述原始图像(即原始高光谱遥感影像)包含了丰富的地物结构分布信息,且地物分布的空间特征比光谱特征更为明显,因此,A1>A2>A3>A4。上述联合特征图谱Z=A1S+A2Sx+A3Sy+A4Sb Specifically, the weight of the S is the first weight A1, the weight of the S x is the second weight A2, the weight of the S y is the third weight A3, and the weight of the S b is the fourth value. The weight A4, because the original image (ie, the original hyperspectral remote sensing image) contains rich distribution information of the feature structure, and the spatial characteristics of the feature distribution are more obvious than the spectral features, therefore, A1>A2>A3>A4. The above joint feature map Z=A1S+A2S x +A3S y +A4S b
For example, if the weights of S, Sx, Sy and Sb are set to 2^3, 2^2, 2^1 and 2^0 respectively, the encoding is expressed as:

Z = 2^3·S + 2^2·Sx + 2^1·Sy + 2^0·Sb

In this way, the encoded feature map Z fuses the hyperspectral remote sensing image (through the binarized normalized image S) with its first-order gradient magnitudes (through Sx, Sy and Sb).
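For illustration only, the weighted joint encoding just described could be sketched in Python/NumPy as follows; the function name encode_feature_map and the array names s, sx, sy, sb are hypothetical and not part of the original disclosure, and the inputs are assumed to already be 0/1 arrays of identical shape (X, Y, B):

import numpy as np

def encode_feature_map(s, sx, sy, sb):
    # Weighted joint encoding Z = 2^3*S + 2^2*Sx + 2^1*Sy + 2^0*Sb (a sketch).
    # Each input is a binary (0/1) array; the result takes values in {0, ..., 15}.
    s, sx, sy, sb = (np.asarray(a, dtype=np.uint8) for a in (s, sx, sy, sb))
    return (8 * s + 4 * sx + 2 * sy + sb).astype(np.uint8)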
S104. Perform histogram feature extraction on the encoded feature map to obtain the three-dimensional surface feature (3DSF).
The histogram feature is computed over a cubic neighborhood V around each pixel. From the formula for Z given above, the values of S, Sx, Sy and Sb are all either 0 or 1, so the encoded feature map Z can take 16 different code values, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 and 15; the histogram feature collected around each pixel is therefore 16-dimensional. Let F_i(x, y, b) denote the histogram feature of the pixel with coordinates (x, y, b) for code value i. The counting formula is:

F_i(x, y, b) = Σ_{j = −⌊Vx/2⌋ to ⌊Vx/2⌋} Σ_{k = −⌊Vy/2⌋ to ⌊Vy/2⌋} Σ_{l = −⌊Vb/2⌋ to ⌊Vb/2⌋} h(i)

where Vx, Vy and Vb respectively denote the spatial and spectral dimensions of the cubic neighborhood, ⌊·⌋ denotes rounding down, and h(i) indicates whether the 3DSF code i appears at the corresponding position in the cubic neighborhood: h(i) equals 1 when Z(x+j, y+k, b+l) = i, and h(i) equals 0 when Z(x+j, y+k, b+l) ≠ i.
Further, the feature extraction device counts the number of times each code value appears in the cubic neighborhood, giving 16 numbers (i.e., h(0), h(1), h(2), ..., h(14), h(15)). The feature extraction device arranges these 16 numbers into an array, which can be regarded as the 3DSF feature of that pixel.
In other words, the histogram feature around each pixel is 16-dimensional, while the original image H is X×Y×B; the result can be viewed as 16 cubes of size X×Y×B stacked together and can therefore be expressed as X×Y×(16×B). For the original image H, the resulting 3DSF feature is F ∈ R^(X×Y×16B), and this feature F can be used directly for subsequent pixel-level classification.
It can be seen that, in the solution of this embodiment of the present application, the feature extraction device first normalizes the original image to obtain a normalized image; second, it calculates the gradient values of the normalized image according to a preset gradient template; third, it binarizes the normalized image and its gradients to obtain binarized data; finally, it encodes the binarized data and obtains the 3DSF feature from the encoding result. In this way the method makes full use of the three-dimensional spatial-spectral structure of the hyperspectral remote sensing image, which improves classification performance and reduces time and space complexity.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of another feature extraction method for hyperspectral remote sensing images according to an embodiment of the present application. As shown in FIG. 2, the method includes:
S201. The feature extraction device normalizes the image of each band of the original image H to obtain a normalized image R.
Here the original image H ∈ R^(X×Y×B), where R denotes the set of real numbers and X, Y and B respectively denote the two spatial dimensions and the number of spectral dimensions of the original image.
It should be noted that the original image H is a hyperspectral remote sensing image.
Specifically, in the spectral dimension, the feature extraction device divides the original image H into M band images according to the different bands. The feature extraction device calculates the mean and standard deviation of the pixels of each of the M band images, and then normalizes each band image according to a preset formula to obtain the processed image. The preset formula is:

Rb = (Hb − mean(Hb)) / std(Hb)

where Hb is any one of the M band images, mean(Hb) is the mean of Hb, std(Hb) is the standard deviation of Hb, and Rb is the image obtained by normalizing Hb.

Further, after each band image of the original image H has been normalized, the normalized image R is obtained, with R ∈ R^(X×Y×B).

Optionally, the normalization may instead be max-min normalization or median normalization.
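As an illustration, this band-wise normalization could be sketched as below; the function name normalize_bands is hypothetical, an (X, Y, B) array layout is assumed, and a small epsilon guards against zero-variance bands (a case the text does not address):

import numpy as np

def normalize_bands(h, eps=1e-12):
    # Step S201 sketch: Rb = (Hb - mean(Hb)) / std(Hb) for every spectral band.
    h = np.asarray(h, dtype=np.float64)        # shape (X, Y, B)
    mean = h.mean(axis=(0, 1), keepdims=True)  # per-band mean
    std = h.std(axis=(0, 1), keepdims=True)    # per-band standard deviation
    return (h - mean) / (std + eps)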
S202. The feature extraction device calculates the gradient values of the normalized image R according to a preset gradient template.
Optionally, the preset gradient template may be [-1, 0, 1].
Calculating the gradient values of the normalized image R according to the preset gradient template means calculating the gradient values of the normalized image R in the spatial dimensions and in the spectral dimension. As shown in FIG. 3, these three directions can be regarded as the three axes of a spatial coordinate system: the two spatial directions are denoted X and Y, and the spectral direction is denoted B.
Specifically, for the image R, the feature extraction device calculates the gradient values in the X, Y and B directions according to the preset gradient template [-1, 0, 1] as follows:

Rx(x, y, b) = R(x+1, y, b) − R(x−1, y, b)
Ry(x, y, b) = R(x, y+1, b) − R(x, y−1, b)
Rb(x, y, b) = R(x, y, b+1) − R(x, y, b−1)

where Rx(x, y, b) is the gradient value in the X direction at the pixel with coordinates (x, y, b) in the normalized image R, and R(x+1, y, b) and R(x−1, y, b) are the pixel values at coordinates (x+1, y, b) and (x−1, y, b) in the normalized image R;

Ry(x, y, b) is the gradient value in the Y direction at the pixel with coordinates (x, y, b) in the normalized image R, and R(x, y+1, b) and R(x, y−1, b) are the pixel values at coordinates (x, y+1, b) and (x, y−1, b) in the normalized image R;

Rb(x, y, b) is the gradient value in the B direction at the pixel with coordinates (x, y, b) in the normalized image R, and R(x, y, b+1) and R(x, y, b−1) are the pixel values at coordinates (x, y, b+1) and (x, y, b−1) in the normalized image R.
Following this method, the feature extraction device obtains the gradients of the normalized image R in the three directions (the gradients in the spatial dimensions and the gradient in the spectral dimension), which can be written as [Rx, Ry, Rb], where Rx and Ry are the gradients of the normalized image R in the spatial dimensions and Rb is its gradient in the spectral dimension.
It should be noted that, before the gradients of the pixels are calculated, the pixel values of points lying outside the normalized image R are set to 0.
For example, suppose one band image of the normalized image R is as shown in FIG. 4a; its resolution is 4×4 and its pixel values are as shown in FIG. 4a. Before calculating the gradient values of the pixels of this band image, the feature extraction device sets the pixel values outside the band image to 0, as shown in FIG. 4b. The gradient value of each pixel is then calculated with the gradient template [-1, 0, 1]. For instance, for the first pixel of the first row with pixel value P11 = 1, the gradient value is P12 − 0 = 2; for the third pixel of the second row with pixel value P23 = 0, the gradient value is P24 − P22 = 2 − 1 = 1. Proceeding in the same way, the feature extraction device obtains the pixel gradients of the band image, as shown in FIG. 4c. FIG. 4a is a schematic diagram of the pixels of a band image; FIG. 4b is a schematic diagram of the pixels of the padded band image; FIG. 4c is a schematic diagram of the pixel gradient values of the band image.
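A sketch of this gradient computation with the template [-1, 0, 1] and zero values outside the image is given below for illustration; the function name spatial_spectral_gradients is hypothetical and an (X, Y, B) array layout is assumed:

import numpy as np

def spatial_spectral_gradients(r):
    # Step S202 sketch: central differences along X, Y and B, with pixels
    # outside the normalized image R treated as 0 (zero padding).
    r = np.asarray(r, dtype=np.float64)
    p = np.pad(r, 1, mode="constant", constant_values=0.0)
    rx = p[2:, 1:-1, 1:-1] - p[:-2, 1:-1, 1:-1]   # Rx(x, y, b) = R(x+1, y, b) - R(x-1, y, b)
    ry = p[1:-1, 2:, 1:-1] - p[1:-1, :-2, 1:-1]   # Ry(x, y, b) = R(x, y+1, b) - R(x, y-1, b)
    rb = p[1:-1, 1:-1, 2:] - p[1:-1, 1:-1, :-2]   # Rb(x, y, b) = R(x, y, b+1) - R(x, y, b-1)
    return rx, ry, rb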
S203. The feature extraction device binarizes the normalized image and its gradients to obtain binarized data.
Specifically, the feature extraction device binarizes the normalized image R and its gradients Rx, Ry and Rb in the three directions to obtain the binarized images S, Sx, Sy and Sb, where S is the image obtained by binarizing the normalized image R, Sx and Sy are the images obtained by binarizing the spatial gradients Rx and Ry of the normalized image R, and Sb is the image obtained by binarizing the spectral gradient Rb of the normalized image.
Illustratively, the feature extraction device binarizes the normalized image R according to the following formula:

S(x, y, b) = 1, if R(x, y, b) > mean(R); S(x, y, b) = 0, if R(x, y, b) ≤ mean(R)

where mean(R) is the mean of the pixel values of the normalized image R, R(x, y, b) is the pixel value of the normalized image R at the point with coordinates (x, y, b) on the coordinate axes shown in FIG. 3, and S(x, y, b) is the binarized value of R(x, y, b). When R(x, y, b) is greater than mean(R), S(x, y, b) equals 1; when R(x, y, b) is less than or equal to mean(R), S(x, y, b) equals 0.
Similarly, the feature extraction device binarizes Rx according to the following formula:

Sx(x, y, b) = 1, if Rx(x, y, b) > mean(Rx); Sx(x, y, b) = 0, if Rx(x, y, b) ≤ mean(Rx)

where mean(Rx) is the mean of the pixel values of Rx, Rx(x, y, b) is the value of Rx at the point with coordinates (x, y, b) on the coordinate axes shown in FIG. 3, and Sx(x, y, b) is the binarized value of Rx(x, y, b). When Rx(x, y, b) is greater than mean(Rx), Sx(x, y, b) equals 1; when Rx(x, y, b) is less than or equal to mean(Rx), Sx(x, y, b) equals 0.
Similarly, the feature extraction device binarizes Ry according to the following formula:

Sy(x, y, b) = 1, if Ry(x, y, b) > mean(Ry); Sy(x, y, b) = 0, if Ry(x, y, b) ≤ mean(Ry)

where mean(Ry) is the mean of the pixel values of Ry, Ry(x, y, b) is the value of Ry at the point with coordinates (x, y, b) on the coordinate axes shown in FIG. 3, and Sy(x, y, b) is the binarized value of Ry(x, y, b). When Ry(x, y, b) is greater than mean(Ry), Sy(x, y, b) equals 1; when Ry(x, y, b) is less than or equal to mean(Ry), Sy(x, y, b) equals 0.
Similarly, the feature extraction device binarizes the spectral gradient Rb according to the following formula:

Sb(x, y, b) = 1, if Rb(x, y, b) > mean(Rb); Sb(x, y, b) = 0, if Rb(x, y, b) ≤ mean(Rb)

where mean(Rb) is the mean of the pixel values of Rb, Rb(x, y, b) is the value of Rb at the point with coordinates (x, y, b) on the coordinate axes shown in FIG. 3, and Sb(x, y, b) is the binarized value of Rb(x, y, b). When Rb(x, y, b) is greater than mean(Rb), Sb(x, y, b) equals 1; when Rb(x, y, b) is less than or equal to mean(Rb), Sb(x, y, b) equals 0.
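The four thresholding operations of step S203 can be sketched with a single helper; the code below is illustrative only, and the function name binarize_at_mean is hypothetical:

import numpy as np

def binarize_at_mean(arr):
    # Step S203 sketch: 1 where the value exceeds the global mean of the array, else 0.
    arr = np.asarray(arr, dtype=np.float64)
    return (arr > arr.mean()).astype(np.uint8)

# Applied to the normalized image and its three gradients:
# s, sx, sy, sb = (binarize_at_mean(a) for a in (r, rx, ry, rb))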
S204. The feature extraction device jointly encodes S, Sx, Sy and Sb according to a preset fusion rule to obtain the encoded feature map.
The encoded values of the encoded feature map Z describe the spatial-spectral structure around each pixel. The fusion rule is that the feature extraction device assigns weights to S, Sx, Sy and Sb according to the saliency of each feature.

Specifically, the weight of S is a first weight A1, the weight of Sx is a second weight A2, the weight of Sy is a third weight A3, and the weight of Sb is a fourth weight A4. Because the original image (i.e., the original hyperspectral remote sensing image) contains rich information about the structural distribution of ground objects, and the spatial characteristics of that distribution are more pronounced than the spectral characteristics, A1 > A2 > A3 > A4. The joint feature map is Z = A1·S + A2·Sx + A3·Sy + A4·Sb.
For example, if the weights of S, Sx, Sy and Sb are set to 2^3, 2^2, 2^1 and 2^0 respectively, the encoding is expressed as:

Z = 2^3·S + 2^2·Sx + 2^1·Sy + 2^0·Sb

In this way, the encoded feature map Z fuses the hyperspectral remote sensing image (through the binarized normalized image S) with its first-order gradient magnitudes (through Sx, Sy and Sb).
S205. The feature extraction device performs histogram feature extraction on the encoded feature map to obtain the 3DSF feature.
The histogram feature is computed over a cubic neighborhood V around each pixel. From the formula for Z in step S204 above, the values of S, Sx, Sy and Sb are all either 0 or 1, so the encoded feature map Z can take 16 different code values, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 and 15; the histogram feature collected around each pixel is therefore 16-dimensional. Let F_i(x, y, b) denote the histogram feature of the pixel with coordinates (x, y, b) for code value i. The counting formula is:

F_i(x, y, b) = Σ_{j = −⌊Vx/2⌋ to ⌊Vx/2⌋} Σ_{k = −⌊Vy/2⌋ to ⌊Vy/2⌋} Σ_{l = −⌊Vb/2⌋ to ⌊Vb/2⌋} h(i)

where Vx, Vy and Vb respectively denote the spatial and spectral dimensions of the cubic neighborhood, ⌊·⌋ denotes rounding down, and h(i) indicates whether the 3DSF code i appears at the corresponding position in the cubic neighborhood: h(i) equals 1 when Z(x+j, y+k, b+l) = i, and h(i) equals 0 when Z(x+j, y+k, b+l) ≠ i.
Further, the feature extraction device counts the number of times each code value appears in the cubic neighborhood, giving 16 numbers (i.e., h(0), h(1), h(2), ..., h(14), h(15)). The feature extraction device arranges these 16 numbers into an array, which can be regarded as the 3DSF feature of that pixel.

In other words, the histogram feature around each pixel is 16-dimensional, while the original image H is X×Y×B; the result can be viewed as 16 cubes of size X×Y×B stacked together and can therefore be expressed as X×Y×(16×B). For the original image H, the resulting 3DSF feature is F ∈ R^(X×Y×16B), and this feature F can be used directly for subsequent pixel-level classification.
It can be seen that, in the solution of this embodiment of the present application, the feature extraction device first normalizes the original image to obtain a normalized image; second, it calculates the gradient values of the normalized image according to a preset gradient template; third, it binarizes the normalized image and its gradients to obtain binarized data; finally, it encodes the binarized data and obtains the 3DSF feature from the encoding result. In this way the method makes full use of the three-dimensional spatial-spectral structure of the hyperspectral remote sensing image, which improves classification performance and reduces time and space complexity.
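Purely as an illustration of how steps S201 to S205 fit together, the hypothetical helpers sketched earlier could be composed as follows (the names are the assumed ones introduced above, not part of the original disclosure):

def extract_3dsf(h, vx=3, vy=3, vb=3):
    # End-to-end sketch of the flow of FIG. 2.
    r = normalize_bands(h)                                          # S201
    rx, ry, rb = spatial_spectral_gradients(r)                      # S202
    s, sx, sy, sb = (binarize_at_mean(a) for a in (r, rx, ry, rb))  # S203
    z = encode_feature_map(s, sx, sy, sb)                           # S204
    return compute_3dsf_histogram(z, vx, vy, vb)                    # S205: shape (X, Y, 16*B)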
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of a hyperspectral remote sensing image feature extraction apparatus according to an embodiment of the present application. As shown in FIG. 5, the apparatus 500 includes:
a normalization processing unit 501, configured to normalize the original image H to obtain the normalized image R, where the normalization processing unit 501 includes:

an acquisition subunit 5011, configured to acquire the mean and variance of the pixels of the original image H; and

a processing subunit 5012, configured to normalize the original image H according to the mean and variance of the pixels of the original image H to obtain the normalized image R;
a calculation unit 502, configured to calculate the gradient of the normalized image R according to the preset gradient template, where the gradient of the normalized image R includes the gradients Rx and Ry of the normalized image R in the spatial dimensions and the gradient Rb in the spectral dimension;
an encoding unit 503, configured to perform encoding according to the gradient of the normalized image R to obtain the encoded feature map, where the encoding unit 503 includes:

a first calculation subunit 5031, configured to calculate the corresponding binarized data S, Sx, Sy and Sb from the normalized image R, its gradients Rx and Ry in the spatial dimensions, and its gradient Rb in the spectral dimension; and

a second calculation subunit 5032, configured to calculate the encoded feature map from S, Sx, Sy, Sb and a preset formula, the preset formula being:

encoded feature map Z = 2^3·S + 2^2·Sx + 2^1·Sy + 2^0·Sb,

where 2^3, 2^2, 2^1 and 2^0 are the weights of S, Sx, Sy and Sb, respectively; and
a feature extraction unit 504, configured to perform histogram feature extraction on the encoded feature map to obtain the three-dimensional surface feature 3DSF.

The feature extraction unit 504 is configured to obtain, according to a counting formula, the number of times a code value i formed from S, Sx, Sy and Sb appears in the cubic neighborhood around each pixel. The counting formula is:

F_i(x, y, b) = Σ_{j = −⌊Vx/2⌋ to ⌊Vx/2⌋} Σ_{k = −⌊Vy/2⌋ to ⌊Vy/2⌋} Σ_{l = −⌊Vb/2⌋ to ⌊Vb/2⌋} h(i)

where F_i(x, y, b) denotes the histogram feature of the pixel with coordinates (x, y, b), Vx, Vy and Vb respectively denote the spatial and spectral dimensions of the cubic neighborhood, ⌊·⌋ denotes rounding down, and h(i) denotes the number of times the code value i appears in the cubic neighborhood; the array formed from these counts is the 3DSF.
In this embodiment, the hyperspectral remote sensing image feature extraction apparatus 500 is presented in the form of units (the normalization processing unit 501, the calculation unit 502, the encoding unit 503 and the feature extraction unit 504). A "unit" here may refer to an application-specific integrated circuit (ASIC), a processor and a memory that execute one or more software or firmware programs, an integrated logic circuit, and/or another device that can provide the above functions.
It can be understood that the functions of the functional units of the hyperspectral remote sensing image feature extraction apparatus 500 of this embodiment may be implemented according to the methods in the foregoing method embodiments; for the specific implementation process, reference may be made to the related descriptions of the foregoing method embodiments, and details are not repeated here.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of a hyperspectral remote sensing image feature extraction apparatus according to an embodiment of the present application, which is used to implement the feature extraction method for hyperspectral remote sensing images disclosed in the embodiments of the present application. The hyperspectral remote sensing image feature extraction apparatus 600 may include at least one bus 601, at least one processor 602 connected to the bus 601, and at least one memory 603 connected to the bus 601.

The processor 602 calls, through the bus 601, the code stored in the memory, so as to normalize the original image H to obtain the normalized image R; calculate the gradient of the normalized image R according to the preset gradient template; perform encoding according to the gradient of the normalized image R to obtain the encoded feature map; and perform histogram feature extraction on the encoded feature map to obtain the three-dimensional surface feature 3DSF.
In this embodiment, the hyperspectral remote sensing image feature extraction apparatus 600 is presented in the form of units. A "unit" here may refer to an application-specific integrated circuit (ASIC), a processor and a memory that execute one or more software or firmware programs, an integrated logic circuit, and/or another device that can provide the above functions.

It can be understood that the functions of the functional units of the hyperspectral remote sensing image feature extraction apparatus 600 of this embodiment may be implemented according to the methods in the foregoing method embodiments; for the specific implementation process, reference may be made to the related descriptions of the foregoing method embodiments, and details are not repeated here.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a program, and when executed, the program performs some or all of the steps of any one of the hyperspectral remote sensing image feature extraction methods described in the foregoing method embodiments.
It should be noted that, for brevity, the foregoing method embodiments are all described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.

In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is merely a division by logical function, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical or in other forms.

The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

The above embodiments are merely intended to describe the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. A feature extraction method for a hyperspectral remote sensing image, characterized in that the method comprises:
    normalizing an original image H to obtain a normalized image R;
    calculating a gradient of the normalized image R according to a preset gradient template;
    performing encoding according to the gradient of the normalized image R to obtain an encoded feature map; and
    performing histogram feature extraction on the encoded feature map to obtain a three-dimensional surface feature 3DSF.
  2. The method according to claim 1, characterized in that normalizing the original image H to obtain the normalized image R comprises:
    acquiring a mean and a variance of pixels of the original image H; and
    normalizing the original image H according to the mean and the variance of the pixels of the original image H to obtain the normalized image R.
  3. The method according to claim 1, characterized in that the gradient of the normalized image R comprises gradients Rx and Ry of the normalized image R in spatial dimensions and a gradient Rb in a spectral dimension.
  4. The method according to claim 3, characterized in that performing encoding according to the gradient of the normalized image R to obtain the encoded feature map comprises:
    calculating corresponding binarized data S, Sx, Sy and Sb from the normalized image R, its gradients Rx and Ry in the spatial dimensions, and its gradient Rb in the spectral dimension; and
    calculating the encoded feature map from S, Sx, Sy, Sb and a preset formula, the preset formula being:
    encoded feature map = 2^3·S + 2^2·Sx + 2^1·Sy + 2^0·Sb,
    wherein 2^3, 2^2, 2^1 and 2^0 are weights of S, Sx, Sy and Sb, respectively.
  5. The method according to claim 4, characterized in that performing histogram feature extraction on the encoded feature map to obtain the three-dimensional surface feature 3DSF comprises:
    obtaining, according to a counting formula, the number of times a code value i formed from S, Sx, Sy and Sb appears in a cubic neighborhood around each pixel,
    the counting formula being:
    F_i(x, y, b) = Σ_{j = −⌊Vx/2⌋ to ⌊Vx/2⌋} Σ_{k = −⌊Vy/2⌋ to ⌊Vy/2⌋} Σ_{l = −⌊Vb/2⌋ to ⌊Vb/2⌋} h(i)
    wherein F_i(x, y, b) denotes the histogram feature of the pixel with coordinates (x, y, b), Vx, Vy and Vb respectively denote the spatial and spectral dimensions of the cubic neighborhood, ⌊·⌋ denotes rounding down, h(i) denotes the number of times the code value i appears in the cubic neighborhood, and the array formed from these counts is the 3DSF.
  6. A hyperspectral remote sensing image feature extraction apparatus, characterized by comprising:
    a normalization processing unit, configured to normalize an original image H to obtain a normalized image R;
    a calculation unit, configured to calculate a gradient of the normalized image R according to a preset gradient template;
    an encoding unit, configured to perform encoding according to the gradient of the normalized image R to obtain an encoded feature map; and
    a feature extraction unit, configured to perform histogram feature extraction on the encoded feature map to obtain a three-dimensional surface feature 3DSF.
  7. The apparatus according to claim 6, characterized in that the normalization processing unit comprises:
    an acquisition subunit, configured to acquire a mean and a variance of pixels of the original image H; and
    a processing subunit, configured to normalize the original image H according to the mean and the variance of the pixels of the original image H to obtain the normalized image R.
  8. The apparatus according to claim 7, characterized in that the gradient of the normalized image R comprises gradients Rx and Ry of the normalized image R in spatial dimensions and a gradient Rb in a spectral dimension.
  9. The apparatus according to claim 8, characterized in that the encoding unit comprises:
    a first calculation subunit, configured to calculate corresponding binarized data S, Sx, Sy and Sb from the normalized image R, its gradients Rx and Ry in the spatial dimensions, and its gradient Rb in the spectral dimension; and
    a second calculation subunit, configured to calculate the encoded feature map from S, Sx, Sy, Sb and a preset formula, the preset formula being:
    encoded feature map = 2^3·S + 2^2·Sx + 2^1·Sy + 2^0·Sb,
    wherein 2^3, 2^2, 2^1 and 2^0 are weights of S, Sx, Sy and Sb, respectively.
  10. The apparatus according to claim 9, characterized in that the feature extraction unit is configured to:
    obtain, according to a counting formula, the number of times a code value i formed from S, Sx, Sy and Sb appears in a cubic neighborhood around each pixel,
    the counting formula being:
    F_i(x, y, b) = Σ_{j = −⌊Vx/2⌋ to ⌊Vx/2⌋} Σ_{k = −⌊Vy/2⌋ to ⌊Vy/2⌋} Σ_{l = −⌊Vb/2⌋ to ⌊Vb/2⌋} h(i)
    wherein F_i(x, y, b) denotes the histogram feature of the pixel with coordinates (x, y, b), Vx, Vy and Vb respectively denote the spatial and spectral dimensions of the cubic neighborhood, ⌊·⌋ denotes rounding down, h(i) denotes the number of times the code value i appears in the cubic neighborhood, and the array formed from these counts is the 3DSF.
PCT/CN2017/101353 2017-09-07 2017-09-12 Feature extraction method and device for hyperspectral remotely sensed image WO2019047248A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710800249.4A CN107633216B (en) 2017-09-07 2017-09-07 Three-dimensional surface space spectrum combined feature coding method and device for hyperspectral remote sensing image
CN201710800249.4 2017-09-07

Publications (1)

Publication Number Publication Date
WO2019047248A1 true WO2019047248A1 (en) 2019-03-14

Family

ID=61100031

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/101353 WO2019047248A1 (en) 2017-09-07 2017-09-12 Feature extraction method and device for hyperspectral remotely sensed image

Country Status (2)

Country Link
CN (1) CN107633216B (en)
WO (1) WO2019047248A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428627A (en) * 2020-03-23 2020-07-17 西北大学 Mountain landform remote sensing extraction method and system
CN111832575A (en) * 2020-07-16 2020-10-27 黄河勘测规划设计研究院有限公司 Water surface area extraction method and device based on remote sensing image
CN112766409A (en) * 2021-02-01 2021-05-07 西北工业大学 Feature fusion method for remote sensing image target detection
CN113160076A (en) * 2021-04-06 2021-07-23 中航航空电子有限公司 Ground object infrared target acquisition method based on target edge neighborhood information
CN113420640A (en) * 2021-06-21 2021-09-21 深圳大学 Mangrove hyperspectral image classification method and device, electronic equipment and storage medium
CN113657199A (en) * 2021-07-28 2021-11-16 西安理工大学 Hyperspectral image anomaly detection method based on space-spectrum extraction
CN117726915A (en) * 2024-02-07 2024-03-19 南方海洋科学与工程广东省实验室(广州) Remote sensing data spatial spectrum fusion method and device, storage medium and terminal

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596213A (en) * 2018-04-03 2018-09-28 中国地质大学(武汉) A kind of Classification of hyperspectral remote sensing image method and system based on convolutional neural networks
CN109360264B (en) * 2018-08-30 2023-05-26 深圳大学 Method and device for establishing unified image model
CN111753834B (en) * 2019-03-29 2024-03-26 中国水利水电科学研究院 Planting land block structure semantic segmentation method and device based on deep neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110163163A1 (en) * 2004-06-01 2011-07-07 Lumidigm, Inc. Multispectral barcode imaging
US8659656B1 (en) * 2010-10-12 2014-02-25 The Boeing Company Hyperspectral imaging unmixing
CN103927756A (en) * 2014-04-28 2014-07-16 中国国土资源航空物探遥感中心 Spectral characteristic index extraction method based on spectral characteristic space centralization
CN107122733A (en) * 2017-04-25 2017-09-01 西安电子科技大学 Hyperspectral image classification method based on NSCT and SAE

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8155391B1 (en) * 2006-05-02 2012-04-10 Geoeye Solutions, Inc. Semi-automatic extraction of linear features from image data
CN106469316B (en) * 2016-09-07 2020-02-21 深圳大学 Hyperspectral image classification method and system based on superpixel-level information fusion

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110163163A1 (en) * 2004-06-01 2011-07-07 Lumidigm, Inc. Multispectral barcode imaging
US8659656B1 (en) * 2010-10-12 2014-02-25 The Boeing Company Hyperspectral imaging unmixing
CN103927756A (en) * 2014-04-28 2014-07-16 中国国土资源航空物探遥感中心 Spectral characteristic index extraction method based on spectral characteristic space centralization
CN107122733A (en) * 2017-04-25 2017-09-01 西安电子科技大学 Hyperspectral image classification method based on NSCT and SAE

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428627A (en) * 2020-03-23 2020-07-17 西北大学 Mountain landform remote sensing extraction method and system
CN111428627B (en) * 2020-03-23 2023-03-24 西北大学 Mountain landform remote sensing extraction method and system
CN111832575A (en) * 2020-07-16 2020-10-27 黄河勘测规划设计研究院有限公司 Water surface area extraction method and device based on remote sensing image
CN111832575B (en) * 2020-07-16 2024-04-30 黄河勘测规划设计研究院有限公司 Remote sensing image-based water surface area extraction method and device
CN112766409A (en) * 2021-02-01 2021-05-07 西北工业大学 Feature fusion method for remote sensing image target detection
CN113160076A (en) * 2021-04-06 2021-07-23 中航航空电子有限公司 Ground object infrared target acquisition method based on target edge neighborhood information
WO2022267388A1 (en) * 2021-06-21 2022-12-29 深圳大学 Mangrove hyperspectral image classification method and apparatus, and electronic device and storage medium
CN113420640B (en) * 2021-06-21 2023-06-20 深圳大学 Mangrove hyperspectral image classification method and device, electronic equipment and storage medium
CN113420640A (en) * 2021-06-21 2021-09-21 深圳大学 Mangrove hyperspectral image classification method and device, electronic equipment and storage medium
CN113657199A (en) * 2021-07-28 2021-11-16 西安理工大学 Hyperspectral image anomaly detection method based on space-spectrum extraction
CN113657199B (en) * 2021-07-28 2023-09-15 西安理工大学 Hyperspectral image anomaly detection method based on space-spectrum extraction
CN117726915A (en) * 2024-02-07 2024-03-19 南方海洋科学与工程广东省实验室(广州) Remote sensing data spatial spectrum fusion method and device, storage medium and terminal
CN117726915B (en) * 2024-02-07 2024-05-28 南方海洋科学与工程广东省实验室(广州) Remote sensing data spatial spectrum fusion method and device, storage medium and terminal

Also Published As

Publication number Publication date
CN107633216B (en) 2021-02-23
CN107633216A (en) 2018-01-26

Similar Documents

Publication Publication Date Title
WO2019047248A1 (en) Feature extraction method and device for hyperspectral remotely sensed image
Liang et al. Material based salient object detection from hyperspectral images
US9633282B2 (en) Cross-trained convolutional neural networks using multimodal images
WO2018081929A1 (en) Hyperspectral remote sensing image feature extraction and classification method and system thereof
CN107346409B (en) pedestrian re-identification method and device
Sirmacek et al. Urban-area and building detection using SIFT keypoints and graph theory
Krithika et al. An individual grape leaf disease identification using leaf skeletons and KNN classification
Zhuo et al. Cloud classification of ground-based images using texture–structure features
CN113033465B (en) Living body detection model training method, device, equipment and storage medium
WO2016150240A1 (en) Identity authentication method and apparatus
CN108549836B (en) Photo copying detection method, device, equipment and readable storage medium
Suresh et al. Image texture classification using gray level co-occurrence matrix based statistical features
WO2018192023A1 (en) Method and device for hyperspectral remote sensing image classification
JP6341650B2 (en) Image processing apparatus, image processing method, and program
WO2020024744A1 (en) Image feature point detecting method, terminal device, and storage medium
JP2014531097A (en) Text detection using multi-layer connected components with histograms
CN110766708A (en) Image comparison method based on contour similarity
Baig et al. Im2depth: Scalable exemplar based depth transfer
Chen et al. Semantic segmentation of aerial imagery via multi-scale shuffling convolutional neural networks with deep supervision
Agarwal et al. MagNet: Detecting digital presentation attacks on face recognition
Deng et al. Attention-aware dual-stream network for multimodal face anti-spoofing
JP6785181B2 (en) Object recognition device, object recognition system, and object recognition method
Hu et al. Structure destruction and content combination for generalizable anti-spoofing
WO2019100348A1 (en) Image retrieval method and device, and image library generation method and device
Ye et al. Fast and robust structure-based multimodal geospatial image matching

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17924680

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17924680

Country of ref document: EP

Kind code of ref document: A1