CN111898662A - Coastal wetland deep learning classification method, device, equipment and storage medium


Info

Publication number
CN111898662A
CN111898662A
Authority
CN
China
Prior art keywords: data, processed, hyperspectral image, image data, layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010701215.1A
Other languages
Chinese (zh)
Other versions
CN111898662B (en)
Inventor
陶然
李伟
赵旭东
张蒙蒙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202010701215.1A (granted as CN111898662B)
Publication of CN111898662A
Application granted
Publication of CN111898662B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N2021/1793 Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a coastal wetland deep learning classification method, device, equipment and storage medium, wherein the method comprises the following steps: correcting and normalizing the collected original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points from and normalizing the collected original lidar data to obtain lidar data to be processed; constructing three Octave convolution layers for each mode; based on the three Octave convolution layers of each mode, performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed to obtain feature fusion data; and extracting directional texture information from the feature fusion data and performing joint spatial, texture and spectral classification in combination with the hyperspectral data to be processed to obtain target joint classification features so as to determine the target category. The joint ground-object classification performance under different resolutions and different modes is improved, and high-precision collaborative classification is realized.

Description

Coastal wetland deep learning classification method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of multi-sensor remote sensing combined classification, in particular to a coastal wetland deep learning classification method, a device, equipment and a storage medium.
Background
Wetlands are located in the land-water transition zone and are among the most biodiverse and productive ecosystems in nature. Research on wetlands, particularly coastal wetlands, is of great significance for protecting the ecological environment and maintaining the healthy development of human production and life. In recent years, coastal wetland ecosystems in China have been damaged to varying degrees, giving rise to more urgent requirements for high-precision dynamic monitoring, fine classification and protection of wetlands. Remote sensing technology, by virtue of its economy, efficiency and wide coverage, is an important means for dynamic monitoring, information extraction and interpretation of coastal wetlands. The combination of diversified spectral and radar imaging technologies with image processing technology provides convenient, high-quality data for spatial and geographic databases.
High-dimensional data represented by hyperspectral images can synchronously acquire spatial, spectral, radiometric and other information of an observed object, prompting the description of the objective world to take on new multi-scale, multi-angle and multi-dimensional characteristics. Lidar data provide elevation information of the surveyed area, which is valuable for better describing the same scene acquired separately by optical sensors. Integrating and processing these different data sources helps to combine different kinds of information and further improves Earth-observation performance. Fusion of multi-source image data extracts salient features from each source image and then fuses these features into a single image by a suitable fusion method. Many signal processing methods, such as multi-scale decomposition, have been applied to multi-source remote sensing image fusion tasks to extract the salient features of images: after the salient features are extracted by an image decomposition method, a suitable fusion strategy is used to obtain the final fused image. The fused hyperspectral image has high spatial resolution and rich spectral information, creating better conditions for in-depth research; however, because hyperspectral imagery integrates imaging with spectroscopy and its massive data are difficult to label, it is difficult to perform hyperspectral image fusion processing with conventional methods.
Disclosure of Invention
In view of the above, a coastal wetland deep learning classification method, device, equipment and storage medium are provided to solve the problems of high cost and low accuracy of hyperspectral image fusion techniques in coastal wetland classification in the related art.
The invention adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a method for deep learning and classification of a coastal wetland, where the method includes:
correcting and normalizing the collected original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points and normalizing the collected original lidar data to obtain lidar data to be processed;
constructing three layers of Octave convolution layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the lidar data to be processed;
based on the three Octave convolutional layers of each mode, performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed to obtain feature fusion data;
directional texture information in the feature fusion data is extracted, and spatial, texture and spectrum combined classification is carried out by combining with the hyperspectral data to be processed to obtain target combined classification features so as to determine the target category.
In a second aspect, an embodiment of the present application provides a coastal wetland deep learning classification device, which includes:
the preprocessing module is used for correcting and normalizing the acquired original hyperspectral image data to obtain hyperspectral image data to be processed, and performing exception point removal processing and normalization processing on the acquired original lidar data to obtain lidar data to be processed;
the convolutional layer construction module is used for constructing three layers of Octave convolutional layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the lidar data to be processed;
the data fusion module is used for carrying out component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed based on the three Octave convolutional layers of each mode to obtain feature fusion data;
and the classification module is used for extracting directional texture information in the feature fusion data, and performing space, texture and spectrum combined classification by combining with the hyperspectral data to be processed to obtain target combined classification features so as to determine the target category.
In a third aspect, an embodiment of the present application provides an apparatus, including:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program, and the computer program is at least used for executing the coastal wetland deep learning classification method in the first aspect of the embodiment of the application;
the processor is used for calling and executing the computer program in the memory.
In a fourth aspect, the present application provides a storage medium, where the storage medium stores a computer program, and the computer program, when executed by a processor, implements the steps in the method for classifying coastal wetland deep learning according to the first aspect.
By adopting the above technical solution, the combined hyperspectral and lidar classification can effectively extract multiple fractional-dimension features of different sensor data, comprehensively utilize spatial, spectral and texture features, fully mine and exploit the integrity and reliability of multi-source data, improve the combined ground-object classification performance under different resolutions and different modes, and realize high-precision collaborative classification.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a method for classifying coastal wetland deep learning according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a coastal wetland deep learning classification device provided in the embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
First, applicable scenarios of the embodiments of the present application will be described. With the rise of deep learning, researchers at home and abroad have proposed many fusion methods based on deep learning. In fusion methods in which a convolutional neural network is used to acquire image features and reconstruct a fused image, only the result of the last layer is used as the image features, an operation that may lose a large amount of useful information captured by the intermediate layers. Therefore, how to acquire multi-dimensional features from different convolutional layers when extracting image features becomes a key problem. Moreover, it is difficult to sufficiently extract features of different dimensions and different directions using only the spatial and spectral information of hyperspectral images and lidar data.
In addition, deep learning model training is data-driven, and a lack of fusion data means sufficient training of the model cannot be guaranteed. Therefore, how to adequately train the model in the absence of a fusion data source is a problem addressed by the embodiments of the present application. Owing to its high spectral resolution, narrow bandwidth and large information content, the hyperspectral image can be used to identify and detect ground targets and has strong diagnostic capability. However, hyperspectral imaging is affected by time of day and weather, making it difficult to produce high-quality images, while radar images provide more accurate elevation information and useful spatial contrast. By further combining visible-light images of higher spatial resolution, image results with high spectral resolution, high spatial resolution and all-day, all-weather characteristics can be obtained. The fusion and multi-dimensional feature extraction of multi-source remote sensing data collected by multiple sensors therefore has important research significance, and the embodiments of the present application accordingly provide a coastal wetland deep learning classification method.
Examples
Fig. 1 is a flowchart of a coastal wetland deep learning classification method according to an embodiment of the present invention. The method may be performed by the coastal wetland deep learning classification apparatus provided by an embodiment of the present invention, and the apparatus may be implemented in software and/or hardware. Referring to fig. 1, the method may specifically include the following steps:
s101, correcting and normalizing the collected original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points and normalizing the collected original lidar data to obtain the lidar data to be processed.
For example, the processing of the raw hyperspectral image data may include: carrying out geometric correction processing and radiation correction processing on the acquired original hyperspectral image data to obtain three-dimensional original hyperspectral image data; and carrying out normalization processing on the spectral reflectance value of the original hyperspectral image in the three-dimensional form to obtain hyperspectral image data to be processed.
The correction process may include geometric correction and radiometric correction. Specifically, the corrected raw hyperspectral image data are recorded as X_HSI, three-dimensional cube data of size R_H × C_H × B_H, where R_H, C_H and B_H are respectively the number of rows, the number of columns and the number of spectral channels of the hyperspectral image data; the spectral reflectance values of the hyperspectral image data are then normalized. In one specific example, R_H × C_H × B_H may be 324 × 220 × 64.
Optionally, the processing of the original lidar data may specifically include: taking the difference between the digital surface model and the digital elevation model in the original lidar data to obtain a normalized digital surface model, so as to remove abnormal points; and performing amplitude normalization on each band of the three-band lidar intensity map data in the original lidar data, to obtain the lidar data to be processed.
Specifically, the raw lidar data X_LiDAR include a Digital Surface Model (DSM) image of size R_L × C_L, a Digital Elevation Model (DEM), a normalized Digital Surface Model (nDSM) and three-band lidar intensity map data, where R_L and C_L are respectively the number of rows and columns of the lidar data. Amplitude normalization is performed on each band separately and abnormal points are removed; the normalized digital surface model is obtained by subtracting the digital elevation model from the digital surface model. In one specific example, R_L × C_L may be 324 × 220:
nDSM=DSM-DEM
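For illustration, a minimal sketch of this preprocessing step (array names and shapes are assumptions, not part of the claimed method):

```python
import numpy as np

def preprocess(hsi, dsm, dem, intensity):
    """Normalize corrected HSI data and clean up raw lidar data.

    hsi:       (R_H, C_H, B_H) corrected hyperspectral reflectance cube
    dsm, dem:  (R_L, C_L) digital surface / elevation models
    intensity: (R_L, C_L, 3) three-band lidar intensity map
    """
    # Normalize spectral reflectance values to [0, 1].
    hsi_norm = (hsi - hsi.min()) / (hsi.max() - hsi.min() + 1e-12)

    # nDSM = DSM - DEM removes terrain elevation, suppressing abnormal
    # points caused by ground relief.
    ndsm = dsm - dem

    # Amplitude-normalize each intensity band separately.
    bands = []
    for b in range(intensity.shape[-1]):
        band = intensity[..., b]
        bands.append((band - band.min()) / (band.max() - band.min() + 1e-12))
    lidar = np.dstack(bands + [ndsm])
    return hsi_norm, lidar
```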
S102, constructing three layers of Octave convolution layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the lidar data to be processed.
Specifically, based on the spatial resolution and channel number of the hyperspectral image data to be processed and of the lidar data to be processed, a linear scale representation of the input channels is first obtained, and three Octave convolution layers are then constructed for the different modes. Octave convolution is a form of convolution that separates the high-frequency and low-frequency information within a convolution, compressing the number of model parameters and improving test accuracy.
S103, based on the three Octave convolution layers of each mode, performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the laser radar data to be processed to obtain feature fusion data.
Specifically, in one specific example, in the input layer of the three Octave convolution layers, the high-frequency components and low-frequency components of the two-dimensional image are separated, where the frequency of the high-frequency components is greater than a set frequency threshold and the frequency of the low-frequency components is less than the set frequency threshold; in the intermediate layer of the three Octave convolution layers, the high-frequency and low-frequency components of each mode are combined for the hyperspectral image data and lidar data of each spatial resolution; and in the output layer of the three Octave convolution layers, frequency component synthesis is performed on the high-frequency and low-frequency components of different resolutions and different frequencies to obtain feature fusion data. The feature fusion data may be high-spatial- and spectral-resolution feature fusion data combining the hyperspectral data information and the lidar data information.
S103 will be described below with a specific example.
An input hyperspectral image X_HSI and lidar image X_LiDAR are obtained, and a linear scale representation is performed to separate the feature tensor of each image into low- and high-frequency components at the Octave input layer. Taking X_HSI as an example, the high-frequency component is the original image without Gaussian filtering, and the low-frequency component is the image obtained by Gaussian filtering. Since the low-frequency components of an image are usually redundant, the length and width of each low-frequency channel are set to 0.5 times those of the high-frequency channels. The original hyperspectral image X_HSI of size R_H × C_H × B_H is decomposed, according to the low-frequency channel proportion α ∈ [0, 1], into

X_HSI = {X_HSI^H, X_HSI^L}

where the high-frequency component X_HSI^H, of size R_H × C_H × (1−α)B_H, captures the detail information of the image, and the low-frequency component X_HSI^L is of size 0.5R_H × 0.5C_H × αB_H. The high-frequency component X_LiDAR^H and low-frequency component X_LiDAR^L of the lidar image are obtained in the same way. In one particular example, α may take 0.75.
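A minimal sketch of this component separation (names are assumptions; average pooling stands in cheaply for Gaussian low-pass filtering):

```python
import torch
import torch.nn.functional as F

def split_frequencies(x, alpha=0.75):
    """Split a (N, C, H, W) tensor into high- and low-frequency parts.

    The first (1 - alpha) * C channels stay at full resolution (high
    frequency); the remaining alpha * C channels are pooled to half
    resolution (low frequency).
    """
    c_high = int(x.shape[1] * (1 - alpha))
    x_high = x[:, :c_high]                   # detail information
    x_low = F.avg_pool2d(x[:, c_high:], 2)   # smoothed, half-size
    return x_high, x_low
```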
The decomposed X_HSI = {X_HSI^H, X_HSI^L} is applied as input to the second Octave convolution layer; the low-frequency and high-frequency components of the convolution output are Y_HSI = {Y_HSI^H, Y_HSI^L}. In the convolution operation, the convolution kernel W = {W^H, W^L} is used to obtain {Y_HSI^H, Y_HSI^L}; correspondingly, W^H and W^L are respectively composed of W^H = {W^(H→H), W^(L→H)} and W^L = {W^(L→L), W^(H→L)}, and the outputs are obtained by convolving these with the input.
In particular, taking the low-frequency output of the hyperspectral image

Y_HSI^L = Y^(L→L) + Y^(H→L)

as an example: the output is divided into a low-frequency-part convolution and a high-frequency-part convolution. To calculate Y_HSI^L, the convolution kernels W^(L→L) and W^(H→L) are initialized and respectively convolved with the corresponding parts of the input data. The high-frequency part of the input image is first down-sampled, and the output is

Y^(H→L)(p, q) = Σ_{(i,j)∈N_k} W^(H→L)(i, j)^T · X^H(2p + i, 2q + j)

where (p, q) denotes the coordinates, k is the convolution kernel size, and N_k is the convolution neighbourhood. For the high-frequency output, the low-frequency part of the input image is up-sampled, and the output is

Y^(L→H)(p, q) = Σ_{(i,j)∈N_k} W^(L→H)(i, j)^T · X^L(⌊p/2⌋ + i, ⌊q/2⌋ + j)

The high-frequency output Y_HSI^H = Y^(H→H) + Y^(L→H), and the corresponding lidar outputs, can be obtained in the same way.
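The following sketch (not the patent's reference implementation; module and parameter names are assumptions) realizes one such Octave convolution layer, with average pooling for the H→L path and nearest-neighbour up-sampling for the L→H path:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    """One Octave convolution layer over (x_high, x_low) feature pairs."""

    def __init__(self, in_ch, out_ch, alpha=0.75, k=3):
        super().__init__()
        in_h = int(in_ch * (1 - alpha))
        in_l = in_ch - in_h
        out_h = int(out_ch * (1 - alpha))
        out_l = out_ch - out_h
        p = k // 2
        self.w_hh = nn.Conv2d(in_h, out_h, k, padding=p)  # H -> H
        self.w_hl = nn.Conv2d(in_h, out_l, k, padding=p)  # H -> L (after pooling)
        self.w_ll = nn.Conv2d(in_l, out_l, k, padding=p)  # L -> L
        self.w_lh = nn.Conv2d(in_l, out_h, k, padding=p)  # L -> H (then upsample)

    def forward(self, x_high, x_low):
        # High-frequency output: Y^H = Y^(H->H) + upsample(Y^(L->H))
        y_high = self.w_hh(x_high) + F.interpolate(
            self.w_lh(x_low), scale_factor=2, mode="nearest")
        # Low-frequency output: Y^L = Y^(L->L) + Y^(H->L) on pooled input
        y_low = self.w_ll(x_low) + self.w_hl(F.avg_pool2d(x_high, 2))
        return y_high, y_low
```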
Optionally, in the output layer of the three Octave convolution layers, frequency component synthesis is performed on each high-frequency component and low-frequency component with different resolutions and different frequencies to obtain feature fusion data, which may specifically be implemented in the following manner: integrating high-frequency data of the hyperspectral image data and low-frequency data of the laser radar image data to obtain first fusion layer data; and integrating the data of the first fusion layer and the high-frequency data of the laser radar image to obtain data of a second fusion layer as feature fusion data.
The Octave convolution output Y = {Y^H, Y^L} integrates the high-frequency and low-frequency components of the image; by setting the output low-frequency channel proportion α to 0, the combined high-frequency component Y_Merge is obtained as output. Specifically, according to the spatial-resolution difference between the hyperspectral image and the lidar image (the spatial resolution of the hyperspectral image is normally lower than that of the lidar image), the first layer combines the high-frequency information Y_HSI^H of the hyperspectral image with the low-frequency information Y_LiDAR^L of the lidar to obtain the first fusion layer Y_Merge1; the second layer combines the fusion layer Y_Merge1 with the high-frequency information of the lidar to output the combined high-frequency component Y_Merge2. The fused feature has size R_M × C_M × B_M and exhibits both high spatial resolution and high spectral resolution.
S104, extracting directional texture information in the feature fusion data, and performing space, texture and spectrum combined classification by combining with the hyperspectral data to be processed to obtain target combined classification features so as to determine the target category.
Specifically, a multi-fractional-domain Gabor full-convolution network is designed on the basis of the feature fusion data: three layers of fractional-domain Gabor filters with different fractional orders are designed to extract the spatial directional texture features of the fused image. In each fractional Gabor convolution layer, fractional Gabor filtering is performed on the original image signal to extract directional texture information; the three layers of fractional Gabor features of different fractional orders are then weighted and combined with the spectral features of the hyperspectral image to perform joint spatial-texture-spectral classification, giving the final joint hyperspectral-lidar classification map. The two-dimensional fractional-domain Gabor filter is formed by multiplying a Gaussian function and a sinusoidal plane wave.
In the embodiment of the application, the hyperspectral and laser radar combined classification can effectively combine and extract a plurality of fractional dimensional features of different sensor data, so that space, spectrum and texture features are comprehensively utilized, the integrity and reliability of multisource data are fully excavated and utilized, the combined ground object classification performance under different resolutions and different modes is improved, and high-precision cooperative classification is realized.
The following is a detailed description:
(1) The basic concept of the two-dimensional Gabor filter is explained first:

ψ(m, n) = (f² / (π γ η)) · exp(−(α² m′² + β² n′²)) · exp(j 2π f m′)

m′ = m cos θ + n sin θ
n′ = −m sin θ + n cos θ

where f is the frequency-domain variable, m and n index the support of the Gabor filter, θ is the angle between the Gaussian function and the plane wave, α and β are the scale coefficients of the Gaussian function in the two directions, and the corresponding γ and η are its scale coefficients in the two directions of the frequency domain.
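A minimal numpy sketch sampling this standard 2-D Gabor form (the grid size and parameter defaults are assumptions):

```python
import numpy as np

def gabor_2d(size=15, f=0.2, theta=0.0, alpha=0.5, beta=0.5):
    """Sample the 2-D Gabor filter psi(m, n) on a size x size grid."""
    half = size // 2
    m, n = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1), indexing="ij")
    m_r = m * np.cos(theta) + n * np.sin(theta)    # rotated coordinates
    n_r = -m * np.sin(theta) + n * np.cos(theta)
    gamma, eta = f / alpha, f / beta               # frequency-domain scales
    envelope = (f**2 / (np.pi * gamma * eta)) * np.exp(
        -(alpha**2 * m_r**2 + beta**2 * n_r**2))
    return envelope * np.exp(2j * np.pi * f * m_r)

# A small directional filter bank, e.g. 8 orientations:
bank = [gabor_2d(theta=k * np.pi / 8) for k in range(8)]
```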
Traditional frequency-domain filtering is global: it obtains the whole spectrum of a signal but cannot effectively handle non-stationary signals and abrupt textures. To overcome this limitation of traditional frequency-domain filtering in two-dimensional signal processing and better analyse the local characteristics of signals, the invention combines the fractional Fourier transform with Gabor filtering to improve directional texture feature extraction from image data. The two-dimensional fractional Fourier transform kernel function is K_(p_x, p_y)(x, y, u, v), where (x, y) are the spatial-domain variables and (u, v) the fractional-domain variables. Using the properties of the kernel function and the separability of the Gaussian window function, fractional Gabor filtering of a two-dimensional signal can be performed first along one direction and then along the other, completing the two-dimensional fractional Gabor transform; the transform kernel decomposes into K_(p_x)(x, u) and K_(p_y)(y, v), where, in one common convention with rotation angle φ = pπ/2,

K_p(x, u) = sqrt(1 − j cot φ) · exp( jπ (x² cot φ − 2 x u csc φ + u² cot φ) )
According to the above principle, the two-dimensional fractional-domain Gabor filter applied in the embodiment of the present application is obtained by combining the separable fractional kernels with Gabor filtering, the Gaussian envelope of the Gabor filter modulating the fractional kernels:

G_(p_x, p_y)(m, n; u, v) = (f² / (π γ η)) · exp(−(α² m′² + β² n′²)) · K_(p_x)(m, u) · K_(p_y)(n, v)

m′ = m cos θ + n sin θ
n′ = −m sin θ + n cos θ
Optionally, a fractional-domain Gabor convolution layer is designed according to the two-dimensional fractional-domain Gabor filter; the fractional-domain Gabor convolution layer is applied to perform a full-convolution operation on the hyperspectral image data to be processed and the lidar data to be processed; a spectral convolution layer is set and applied to the hyperspectral image data to be processed, its results being summed to obtain the spectral features of the hyperspectral image data; the weighted sum of the directional texture information of the fused data and the spectral features of the hyperspectral image data is taken as the joint feature; with the joint feature as input, the probability that each pixel belongs to each category is acquired; and the category with the highest probability is determined as the target category.
(2) A fractional-domain Gabor convolution layer is designed based on the two-dimensional fractional-domain Gabor filter. A fixed set of fractional transform orders (p_x, p_y) is used in each convolution layer to extract image features in the corresponding fractional domain. The fractional-domain Gabor convolution kernel is obtained by element-wise multiplication of the two-dimensional fractional-domain Gabor filter with a classical convolution kernel:

W_(i,o)^k = c_(i,o) ∘ G_(i,o)^k

where G_(i,o)^k denotes the i-th fractional Gabor modulation kernel of the o-th channel of the k-th branch, and c_(i,o) denotes the original convolution kernel of each branch.
(3) Three fractional-domain Gabor convolution layers with fractional orders (p_x1, p_y1), (p_x2, p_y2) and (p_x3, p_y3) are designed, with outputs Y_Gabor1, Y_Gabor2 and Y_Gabor3, e.g.

Y_Gabor1 = W_Gabor · Y_Merge + B

where W_Gabor represents the two-dimensional fractional-domain Gabor filter bank and B is the bias term. The results of the first three layers are added in a fourth convolution layer to obtain the multi-fractional-domain joint directional Gabor feature Y_Gabor of the fused data.
(4) A full-convolution operation is performed on the spectral information of the hyperspectral image using the raw hyperspectral data X_HSI. First, the whole hyperspectral image is input to the first full-convolution layer, whose convolution kernels are of size 1 × 1, so as to obtain rich spectral features for each pixel of the hyperspectral image. Each channel of the convolution layer is

Y_i = f( Σ_j w_(i,j) * X_j + b_i )

where Y_i is the i-th channel of the convolution output, w_(i,j) is the convolution kernel, X_j is the j-th channel output of the previous layer, b_i is the bias term of the i-th channel, and f(x) = max(0, x) is the activation function.
(5) A spectral convolution branch is designed, and the results of the first three layers are added in a fourth convolution layer to obtain the spectral feature Y_Spec of the hyperspectral image data:

Y_Spec = Σ_{k=1}^{3} Y^k

where Y_i^k represents the i-th channel output of the k-th layer.
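A minimal sketch of this 1 × 1 spectral branch (the layer count follows steps (4)-(5); the channel width is an assumption):

```python
import torch.nn as nn

class SpectralBranch(nn.Module):
    """Three 1x1 full-convolution layers whose outputs are summed."""

    def __init__(self, bands, width=64):
        super().__init__()
        self.l1 = nn.Sequential(nn.Conv2d(bands, width, 1), nn.ReLU())
        self.l2 = nn.Sequential(nn.Conv2d(width, width, 1), nn.ReLU())
        self.l3 = nn.Sequential(nn.Conv2d(width, width, 1), nn.ReLU())

    def forward(self, x_hsi):
        y1 = self.l1(x_hsi)       # per-pixel spectral features
        y2 = self.l2(y1)
        y3 = self.l3(y2)
        return y1 + y2 + y3       # Y_Spec: sum of the three layers
```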
(6) The multi-fractional-domain joint directional Gabor feature Y_Gabor of the fused data and the spectral feature Y_Spec of the hyperspectral image data are combined as the joint feature

Y = λ_Gabor · Y_Gabor + λ_Spec · Y_Spec

where λ_Gabor and λ_Spec are the weighting factors of the Gabor feature and the spectral feature.
(7) With the joint feature as input, the probability that the pixel located at (u, v) belongs to the k-th class is obtained as

P( Y(u, v) = k ) = exp( Y_k(u, v) ) / Σ_{i=1}^{n} exp( Y_i(u, v) )

where Y(u, v) represents the label of the pixel at position (u, v), Y_i(u, v) represents the output feature of the corresponding channel of the fusion layer, and n is the total number of categories contained in the image. The class of maximum probability is taken for each pixel to obtain the final classification result.
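Putting steps (6) and (7) together, a brief sketch, assuming both branches have already been projected to class channels and that the weights λ are given:

```python
import torch

def classify(y_gabor, y_spec, lam_gabor=0.5, lam_spec=0.5):
    """Weighted joint feature followed by per-pixel softmax classification.

    y_gabor, y_spec: (N, n_classes, H, W) maps from the Gabor and
    spectral branches.
    """
    y = lam_gabor * y_gabor + lam_spec * y_spec   # joint feature Y
    probs = torch.softmax(y, dim=1)               # P(Y(u, v) = k)
    return probs.argmax(dim=1)                    # per-pixel class map
```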
In the above embodiment, the spatial resolution of the hyperspectral image is the same as that of the lidar image, and the scene contains 11 types of ground objects in total; the description above therefore takes multi-source remote sensing data of the same spatial resolution as its example.
Illustratively, when the spatial resolution of the hyperspectral image is 1/2 of that of the lidar image and the scene contains 20 types of ground objects, the joint classification method provided by the invention can be explained with multi-source remote sensing data of different spatial resolutions as the example. In this specific example, R_H × C_H × B_H of the three-dimensional cube data may be 4124 × 1202 × 48, and the raw lidar data X_LiDAR include digital surface model images, a digital elevation model, a normalized digital surface model and three-band lidar intensity map data of size 8248 × 2404 × 7, seven bands in total. The other method steps are the same as in the above embodiment and are not repeated here.
In addition, the present application performs multi-order fractional Fourier transforms on the spectral curve of each pixel in the hyperspectral image and analyses the statistical distribution characteristics of the image in different transform domains, significantly improving the discrimination between different types of ground objects. For the multi-scale, multi-modal characteristics of multi-source remote sensing data in deep learning, an Octave convolutional neural network is used to perform frequency-domain analysis on the deep network, separating the low- and high-frequency components of the multi-source remote sensing data, extracting features in different fractional domains, and studying the directional characteristics of the deep network, thereby realizing high-precision collaborative classification. A hierarchical fusion module is constructed to complete a multi-channel, multi-source-domain, multi-hidden-layer feature fusion classification model, comprehensively utilizing the spatial, spectral, texture, elevation and other information of multi-source remote sensing data to realize high-precision collaborative classification.
Fig. 2 is a schematic structural diagram of a coastal wetland deep learning classification apparatus according to an embodiment of the present invention, which is suitable for implementing the coastal wetland deep learning classification method according to the embodiment of the present invention. As shown in fig. 2, the apparatus may specifically include a preprocessing module 201, a convolutional layer construction module 202, a data fusion module 203, and a classification module 204.
The preprocessing module 201 is configured to perform correction processing and normalization processing on the acquired original hyperspectral image data to obtain hyperspectral image data to be processed, and perform exception point removal processing and normalization processing on the acquired original lidar data to obtain lidar data to be processed; the convolutional layer construction module 202 is configured to construct three layers of Octave convolutional layers in each mode according to the spatial resolution and the number of channels of the to-be-processed hyperspectral image data and the spatial resolution and the number of channels of the to-be-processed lidar data; the data fusion module 203 is used for performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed based on three Octave convolutional layers of each mode to obtain feature fusion data; the classification module 204 is configured to extract directional texture information in the feature fusion data, perform spatial, texture, and spectrum joint classification in combination with the to-be-processed hyperspectral data, obtain a target joint classification feature, and determine a target category.
By adopting the above technical solution, the combined hyperspectral and lidar classification can effectively extract multiple fractional-dimension features of different sensor data, comprehensively utilize spatial, spectral and texture features, fully mine and exploit the integrity and reliability of multi-source data, improve the combined ground-object classification performance under different resolutions and different modes, and realize high-precision collaborative classification.
Optionally, the preprocessing module 201 is specifically configured to:
carrying out geometric correction processing and radiation correction processing on the acquired original hyperspectral image data to obtain three-dimensional original hyperspectral image data;
and carrying out normalization processing on the spectral reflectance value of the original hyperspectral image in the three-dimensional form to obtain hyperspectral image data to be processed.
Optionally, the preprocessing module 201 is specifically configured to:
the difference is made between a digital surface model and a digital elevation model in original laser radar data to obtain a normalized digital surface model so as to remove abnormal points;
and respectively carrying out amplitude normalization processing on each wave band by using three-wave band laser radar intensity map data in the original laser radar data to obtain laser radar data to be processed.
Optionally, the data fusion module 203 includes:
the first fusion submodule is used for separating high-frequency components and low-frequency components of the two-dimensional image on an input layer of the three-layer Octave convolution layer, wherein the frequency of the high-frequency components is greater than a set frequency threshold value, and the frequency of the low-frequency components is less than the set frequency threshold value;
the second fusion submodule is used for combining high-frequency components and low-frequency components of each mode aiming at the hyperspectral image data and the laser radar data of each spatial resolution in the middle layer of the three-layer Octave convolution layer;
and the third fusion submodule is used for carrying out frequency component synthesis on each high-frequency component and low-frequency component with different resolutions and different frequencies on the output layer of the three-layer Octave convolution layer to obtain characteristic fusion data.
Optionally, the third fusion submodule is specifically configured to:
integrating high-frequency data of the hyperspectral image data and low-frequency data of the laser radar image data to obtain first fusion layer data;
and integrating the data of the first fusion layer and the high-frequency data of the laser radar image to obtain data of a second fusion layer as feature fusion data.
Optionally, the classification module 204 is specifically configured to:
designing a plurality of two-dimensional fractional domain Gabor filters with different fractional orders according to the feature fusion data to extract directional texture information of the feature fusion data;
designing a fractional-domain Gabor convolution layer according to the two-dimensional fractional-domain Gabor filter;
applying a fractional domain Gabor convolution layer, and performing full convolution operation on hyperspectral image data to be processed and laser radar data to be processed;
setting a spectral convolution layer, applying the spectral convolution layer to the hyperspectral image data to be processed and summing the results to obtain the spectral features of the hyperspectral image data;
taking the weighted sum of the directional texture information of the fusion data and the spectral feature of the hyperspectral image data as a joint feature;
taking the joint characteristics as input, and acquiring the probability that each pixel point belongs to each category;
and determining the category with the highest probability as the target category.
Optionally, the two-dimensional fractional domain Gabor filter is a multiplication of a gaussian function and a sinusoidal plane wave.
The coastal wetland deep learning classification device provided by the embodiment of the invention can execute the coastal wetland deep learning classification method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
An embodiment of the present invention further provides an apparatus. Please refer to fig. 3, which is a schematic structural diagram of the apparatus. As shown in fig. 3, the apparatus includes: a processor 310, and a memory 320 coupled to the processor 310; the memory 320 is used for storing a computer program, the computer program being at least used for executing the coastal wetland deep learning classification method of the embodiment of the present invention; the processor 310 is used for calling and executing the computer program in the memory. The coastal wetland deep learning classification method at least comprises: correcting and normalizing the collected original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points from and normalizing the collected original lidar data to obtain lidar data to be processed; constructing three Octave convolution layers for each mode according to the spatial resolution and number of channels of the hyperspectral image data to be processed and of the lidar data to be processed; based on the three Octave convolution layers of each mode, performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed to obtain feature fusion data; and extracting directional texture information from the feature fusion data and performing joint spatial, texture and spectral classification in combination with the hyperspectral data to be processed to obtain target joint classification features so as to determine the target category.
The embodiment of the present invention further provides a storage medium, where the storage medium stores a computer program, and when the computer program is executed by a processor, the method implements the steps in the method for classifying coastal wetland deep learning according to the embodiment of the present invention: correcting and normalizing the collected original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points and normalizing the collected original lidar data to obtain lidar data to be processed; constructing three layers of Octave convolution layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the laser radar data to be processed; based on the three-layer Octave convolution layer of each mode, performing component separation, component combination and frequency component synthesis on hyperspectral image data to be processed and laser radar data to be processed to obtain feature fusion data; directional texture information in the feature fusion data is extracted, and spatial, texture and spectrum combined classification is carried out by combining with the hyperspectral data to be processed to obtain target combined classification features so as to determine the target category.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A coastal wetland deep learning classification method is characterized by comprising the following steps:
correcting and normalizing the collected original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points and normalizing the collected original lidar data to obtain lidar data to be processed;
constructing three layers of Octave convolution layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the lidar data to be processed;
based on the three Octave convolutional layers of each mode, performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed to obtain feature fusion data;
directional texture information in the feature fusion data is extracted, and spatial, texture and spectrum combined classification is carried out by combining with the hyperspectral data to be processed to obtain target combined classification features so as to determine the target category.
2. The method according to claim 1, wherein the step of performing correction processing and normalization processing on the collected original hyperspectral image data to obtain hyperspectral image data to be processed comprises the following steps:
carrying out geometric correction processing and radiation correction processing on the acquired original hyperspectral image data to obtain three-dimensional original hyperspectral image data;
and carrying out normalization processing on the spectral reflectance value of the original hyperspectral image in the three-dimensional form to obtain hyperspectral image data to be processed.
3. The method according to claim 1, wherein the performing exception point removal processing and normalization processing on the collected raw lidar data to obtain the lidar data to be processed comprises:
applying a difference between a digital surface model and a digital elevation model in the original laser radar data to obtain a normalized digital surface model so as to remove abnormal points;
and respectively carrying out amplitude normalization processing on each wave band by using the three-wave-band laser radar intensity map data in the original laser radar data to obtain laser radar data to be processed.
4. The method according to claim 1, wherein the performing component separation, component combination and frequency component synthesis on the to-be-processed hyperspectral image data and the to-be-processed lidar data based on the three-layer Octave convolutional layer in each mode to obtain feature fusion data comprises:
separating high-frequency components and low-frequency components of the two-dimensional image on an input layer of the three Octave convolutional layers, wherein the frequency of the high-frequency components is greater than a set frequency threshold value, and the frequency of the low-frequency components is less than the set frequency threshold value;
combining high-frequency components and low-frequency components of each mode aiming at hyperspectral image data and laser radar data of each spatial resolution in an intermediate layer of the three-layer Octave convolutional layer;
and performing frequency component synthesis on each high-frequency component and low-frequency component with different resolutions and different frequencies on the output layer of the three-layer Octave convolution layer to obtain feature fusion data.
5. The method of claim 4, wherein the obtaining feature fusion data by performing frequency component synthesis on each of the high frequency component and the low frequency component with different resolutions and different frequencies at an output layer of the three-layer Octave convolutional layer comprises:
integrating high-frequency data of the hyperspectral image data and low-frequency data of the laser radar image data to obtain first fusion layer data;
and synthesizing the data of the first fusion layer and the high-frequency data of the laser radar image to obtain data of a second fusion layer as feature fusion data.
6. The method according to claim 1, wherein the extracting directional texture information in the feature fusion data, and performing spatial, texture and spectral joint classification in combination with to-be-processed hyperspectral data to obtain a target joint classification feature to determine a target category comprises:
designing a plurality of two-dimensional fractional domain Gabor filters with different fractional orders according to the feature fusion data to extract directional texture information of the feature fusion data;
designing a fractional-domain Gabor convolution layer according to the two-dimensional fractional-domain Gabor filter;
applying the fractional domain Gabor convolution layer to perform full convolution operation on hyperspectral image data to be processed and laser radar data to be processed;
setting a spectral convolution layer, applying the spectral convolution layer to the hyperspectral image data to be processed and summing the results to obtain the spectral characteristics of the hyperspectral image;
taking the weighted sum of the directional texture information of the fusion data and the spectral feature of the hyperspectral image data as a joint feature;
taking the combined features as input, and acquiring the probability that each pixel point belongs to each category;
and determining the category with the highest probability as the target category.
7. The method of claim 6, wherein the two-dimensional fractional domain Gabor filter is a multiplication of a Gaussian function and a sinusoidal plane wave.
8. A coastal wetland deep learning classification device, characterized by comprising:
a preprocessing module, configured to correct and normalize acquired original hyperspectral image data to obtain to-be-processed hyperspectral image data, and to perform outlier removal and normalization on acquired original lidar data to obtain to-be-processed lidar data;
a convolutional layer construction module, configured to construct a three-layer Octave convolutional layer for each modality according to the spatial resolution and number of channels of the to-be-processed hyperspectral image data and of the to-be-processed lidar data;
a data fusion module, configured to perform component separation, component combination and frequency component synthesis on the to-be-processed hyperspectral image data and the to-be-processed lidar data based on the three-layer Octave convolutional layer of each modality, to obtain feature fusion data; and
a classification module, configured to extract directional texture information from the feature fusion data and to perform joint spatial, texture and spectral classification in combination with the to-be-processed hyperspectral image data, so as to obtain a target joint classification feature and determine a target category.
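Structurally, the device chains its four modules in the order listed above. A hypothetical skeleton, with all names and signatures invented for illustration rather than taken from the patent:

```python
class CoastalWetlandClassifier:
    """Illustrative wiring of the four modules of claim 8; the callables
    and their signatures are assumptions, not the patented interfaces."""
    def __init__(self, preprocess, build_layers, fuse, classify):
        self.preprocess = preprocess      # preprocessing module
        self.build_layers = build_layers  # convolutional layer construction module
        self.fuse = fuse                  # data fusion module
        self.classify = classify          # classification module

    def run(self, raw_hsi, raw_lidar):
        # Correct/normalize the raw inputs, then build layers, fuse, classify.
        hsi, lidar = self.preprocess(raw_hsi, raw_lidar)
        octave_layers = self.build_layers(hsi, lidar)
        fused = self.fuse(octave_layers, hsi, lidar)
        return self.classify(fused, hsi)
```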
9. An apparatus, comprising:
a processor, and a memory coupled to the processor;
wherein the memory is configured to store a computer program at least for executing the coastal wetland deep learning classification method of any one of claims 1 to 7; and
the processor is configured to call and execute the computer program in the memory.
10. A storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the coastal wetland deep learning classification method according to any one of claims 1 to 7.
CN202010701215.1A 2020-07-20 Coastal wetland deep learning classification method, device, equipment and storage medium Active, granted as CN111898662B

Priority Application (1)

Application Number Priority Date Filing Date Title
CN202010701215.1A 2020-07-20 2020-07-20 Coastal wetland deep learning classification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111898662A (application) 2020-11-06
CN111898662B (grant) 2023-01-06

Family

Family ID: 73189546
Family Applications (1): CN202010701215.1A (Active), priority date 2020-07-20, filing date 2020-07-20
Country Status (1): CN (CN111898662B)

Cited By (5)

* Cited by examiner, † Cited by third party

Publication number Priority date Publication date Assignee Title
WO2022109945A1 * 2020-11-26 2022-06-02 深圳大学 Hyperspectral and lidar joint classification method based on scale adaptive filtering
CN112464891A (granted as CN112464891B, 2023-06-16) * 2020-12-14 2021-03-09 湖南大学 Hyperspectral image classification method
CN113361407A * 2021-06-07 2021-09-07 上海海洋大学 PCANet-based space spectrum feature and hyperspectral sea ice image combined classification method
CN114707595A (granted as CN114707595B, 2024-01-16) * 2022-03-29 2022-07-05 中国科学院精密测量科学与技术创新研究院 Spark-based hyperspectral laser radar multichannel weighting system and method
CN116894972A (granted as CN116894972B, 2024-02-13) * 2023-06-25 2023-10-17 耕宇牧星(北京)空间科技有限公司 Wetland information classification method and system integrating airborne camera image and SAR image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015207235A (en) * 2014-04-23 2015-11-19 日本電気株式会社 Data fusion device, land coverage classification system, method and program
CN109993220A (en) * 2019-03-23 2019-07-09 西安电子科技大学 Multi-source Remote Sensing Images Classification method based on two-way attention fused neural network
CN111191736A (en) * 2020-01-05 2020-05-22 西安电子科技大学 Hyperspectral image classification method based on depth feature cross fusion
CN111242228A (en) * 2020-01-16 2020-06-05 武汉轻工大学 Hyperspectral image classification method, device, equipment and storage medium

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant