CN114862731B - Multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information - Google Patents

Multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information

Info

Publication number
CN114862731B
CN114862731B (application CN202210319962.8A, publication CN114862731A)
Authority
CN
China
Prior art keywords
image
fusion
network
spatial
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210319962.8A
Other languages
Chinese (zh)
Other versions
CN114862731A (en)
Inventor
Zhang Hongyan
Wang Wengao
Cao Weinan
Yang Guangyi
Zhang Liangpei
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202210319962.8A
Publication of CN114862731A
Application granted
Publication of CN114862731B
Legal status: Active
Anticipated expiration: legal-status assumed

Classifications

    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/045 Neural network architectures; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06V 10/58 Extraction of image or video features relating to hyperspectral data
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space; mappings, e.g. subspace methods
    • G06V 10/806 Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06T 2207/10032 Satellite or aerial image; remote sensing
    • G06T 2207/10036 Multispectral image; hyperspectral image
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20221 Image fusion; image merging
    • Y02A 40/10 Adaptation technologies in agriculture


Abstract

The invention discloses a multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information, and proposes SSLRNet, a novel multi-layer multi-branch fusion network that combines spatial-spectral guidance with a low-rank prior. The network first constructs a multi-layer multi-branch fusion sub-network (MLMB) that extracts features from multiple branches and performs multi-level feature fusion to reconstruct a preliminary fused image. It then constructs a spatial-spectrally guided correction sub-network, which guides the preliminary fused image produced by MLMB using the band-wise sum image of the multispectral image and the band-average image of the hyperspectral image, reducing spatial and spectral distortion. Finally, it constructs a low-rank prior constraint sub-network based on a low-rank neural network and combines it with the deep learning network, using the network itself to perform the low-rank decomposition so that the fusion result meets practical application requirements. The invention improves the fusion accuracy of the network and better satisfies real application needs.

Description

Multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information
Technical Field
The invention relates to the field of hyperspectral and multispectral image fusion, and in particular to a method that reduces the spatial and spectral distortion produced during fusion through spatial-spectral guidance, and that makes the fused image better meet practical application requirements by embedding a low-rank prior in a neural network. The mapping from low-resolution to high-resolution images is learned in a data-driven manner, so that the hyperspectral and multispectral images are fused effectively.
Background
A hyperspectral image typically has tens to hundreds of bands with narrow spectral ranges, i.e. high spectral resolution. This makes it possible to approximate very fine spectral curves and to distinguish different ground-object materials and characteristics, so hyperspectral images are widely used in tasks such as image classification, target recognition and change detection. However, sensor imaging must maintain a certain signal-to-noise ratio, so the spatial resolution of hyperspectral images tends to be low due to limitations of the imaging system, which greatly restricts their application. In contrast, multispectral images have fewer bands and lower spectral resolution but higher spatial resolution, providing fine texture and geometric features. Reconstructing a hyperspectral image with high spatial resolution by fusing a hyperspectral image with a multispectral image is therefore a very important task.
At present, methods for fusing hyperspectral and multispectral images fall mainly into four categories:
Pansharpening-based methods: These methods treat hyperspectral-multispectral fusion as a set of pansharpening sub-problems. The hyperspectral bands are split into the part covered by the multispectral band range and the uncovered part; the multispectral bands are resampled for the uncovered part, and each group is then fused with a pansharpening method. Although this improves the spatial resolution of the hyperspectral image to some extent, casting the whole fusion problem as pansharpening sub-problems leaves the spatial structure information of the multispectral image underused.
Matrix-decomposition-based methods: These methods decompose the three-dimensional hyperspectral and multispectral images along the spectral dimension into two matrices, an endmember matrix and an abundance matrix, optimize them with a series of optimization methods until convergence, and then obtain a hyperspectral image with high spatial resolution. However, converting a three-dimensional tensor into a two-dimensional matrix destroys the continuity of the spatial and spectral information; moreover, these methods consider only the spectral correlation of the hyperspectral image and neglect non-local similarity, so the fusion accuracy is limited.
Tensor-decomposition-based methods: Unlike matrix decomposition, these methods usually adopt CP decomposition or Tucker decomposition to factor the three-dimensional hyperspectral and multispectral images into a core tensor and dictionaries along the three modes, then optimize until convergence to obtain a hyperspectral image with high spatial resolution. However, unfolding the tensor into a matrix along each mode destroys its spatial structure, making it difficult to preserve the intrinsic structural information; furthermore, the weight of the tensor nuclear norm has no clearly optimal value and is set empirically, so the fusion accuracy is limited.
Deep-learning-based methods: With their powerful capability for modeling nonlinear relations, deep-learning-based methods are gradually replacing the three classes above. Typically, the original hyperspectral and multispectral images are downsampled; the downsampled images serve as inputs and the original hyperspectral image as the target to train a deep network. After training, feeding the hyperspectral and multispectral images into the network produces the desired high-resolution hyperspectral image. The core of such methods is to design an excellent network that can model the nonlinear relation between the inputs and the high-resolution hyperspectral image and thus generate a high-quality result.
In existing deep-learning-based fusion methods, the hyperspectral and multispectral images are usually stacked as a single network input, which makes it difficult to preserve and extract the distinct characteristics of each image and increases the difficulty of fusion. Such methods also lack effective spatial-spectral guidance and suffer severe spatial and spectral distortion. Moreover, the fusion process lacks physical interpretability and tends to ignore the inherent prior characteristics of hyperspectral images, so the fused image does not necessarily meet practical application requirements.
It can be seen that no fully satisfactory method for fusing hyperspectral and multispectral images has yet emerged.
Disclosure of Invention
Aiming at the defects of existing deep-learning-based hyperspectral and multispectral image fusion methods, the invention provides a multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information. A multi-layer multi-branch sub-network is constructed for preliminary fusion; a spatial-spectrally guided correction sub-network is constructed to reduce the spatial and spectral distortion produced during fusion; finally, a low-rank prior constraint sub-network based on a low-rank neural network is constructed and combined with the deep learning network, using the network itself to perform the low-rank decomposition and impose a low-rank constraint on the fused image, so that the fusion result meets practical application requirements.
The technical scheme of the invention provides a multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information, comprising the following steps:
step 1, Gaussian filtering and downsampling are applied to the given hyperspectral and multispectral images to generate the network training inputs, and the original hyperspectral image is used as the target image for computing the loss function;
step 2, a multi-layer multi-branch network combining spatial-spectral guidance and a low-rank prior is constructed; the downsampled hyperspectral and multispectral images are input, the hyperparameters of the network are set, and the network is trained according to the loss function until training converges or the maximum number of training epochs is reached;
the specific structure of the multi-layer multi-branch network combining spatial-spectral guidance and the low-rank prior is as follows:
First, a multi-layer multi-branch fusion sub-network MLMB is constructed to extract features from multiple branches, perform multi-level feature fusion, and reconstruct a preliminary fused image. The features of the hyperspectral and multispectral images are extracted effectively in a multi-branch manner, and the multi-level fusion strategy makes the fusion between features more complete. The sub-network comprises a feature extraction part, a feature fusion part and an image reconstruction part;
Then, a spatial-spectrally guided correction sub-network for the fused image is constructed. The gray values obtained by summing the bands of the low-spectral-resolution, high-spatial-resolution image guide the spatial reconstruction of the fused image to avoid spatial distortion, giving the spatially constrained fused image; the average gray value of each band of the low-spatial-resolution, high-spectral-resolution image guides the spectral reconstruction of the fused image to avoid spectral distortion, giving the spatial-spectrally constrained fused image;
and step 3, after training, the original hyperspectral and multispectral images are input into the trained network to fuse them and obtain the high-resolution hyperspectral image.
Further, in step 1, the original hyperspectral image and the original multispectral image are first filtered with a Gaussian convolution kernel:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))    (1)

Formula (1) is the two-dimensional Gaussian kernel, where σ denotes the standard deviation and x and y denote the horizontal and vertical offsets from a pixel to the center pixel. Bilinear downsampling is then applied to the filtered images, finally yielding the training inputs of the network constructed in step 2: a multispectral image X ∈ R^(W×H×l) and a hyperspectral image Y ∈ R^(w×h×L), where W = r·w, H = r·h and L > l. Here W, H and l denote the length, width and number of bands of the multispectral image, w, h and L denote the length, width and number of bands of the hyperspectral image, and r denotes the ratio between the multispectral and hyperspectral spatial sizes. Meanwhile, the original hyperspectral image serves as the target image Z ∈ R^(W×H×L) during training.
Further, in step 2, the feature extraction part of the MLMB network adopts a multi-branch feature extraction scheme to fully extract the features of the hyperspectral and multispectral images. The specific process is as follows: first, to avoid mutual interference between the features of the two images, the two images are input separately, a multispectral feature extraction branch and a hyperspectral feature extraction branch are constructed, and the same feature extraction module is used in both branches to extract features effectively. Second, within the feature extraction module, a deep feature extraction branch is built using skip connections: seven convolution layers perform feature extraction, the outputs of the first and third convolution layers are added as the input of the fourth, the outputs of the fourth and sixth are added as the input of the seventh, and the output of the seventh convolution layer is the final feature of the deep branch. The shallow feature extraction branch extracts features sequentially with three different convolution layers, and the output of the third convolution layer is the final feature of the shallow branch.
Further, in step 2, the feature fusion part of the MLMB network adopts a multi-level fusion strategy to make feature fusion more complete; the multi-level strategy refers to deep feature fusion, shallow feature fusion, and deep-shallow feature fusion. The specific process is as follows: first, three convolution layers fuse the deep features of the multispectral and hyperspectral images to obtain the deep features of the fused image; meanwhile, three convolution layers also fuse the shallow features of the multispectral and hyperspectral images to obtain the shallow features of the fused image; finally, the deep and shallow features of the fused image are fused through three convolution layers to obtain the fused-image features used for the subsequent image reconstruction.
Further, in step 2, the reconstruction part of MLMB uses three deconvolution layers, each activated by a ReLU function, which sets negative inputs to 0, as shown in the following formula:
f(x)=max(0,x) (2)
Here, x represents an input value of the function.
Further, the overall processing of the multi-layer multi-branch fusion sub-network MLMB is expressed by the following formula:

Z_MLMB = R_e(F(F_D(D(X) + D(Y)) + F_S(S(X) + S(Y))))    (3)

where D denotes the deep feature extraction branch, S the shallow feature extraction branch, F_D the deep feature fusion layers, F_S the shallow feature fusion layers, F the deep-shallow feature fusion layers, R_e the image reconstruction layers, and Z_MLMB the preliminary fused image produced by the MLMB module.
Further, in step 2, spatial distortion of the fused image is avoided by the spatial guidance part Spag, giving the spatially constrained fused image;
The spatial guidance part Spag takes the multispectral image as input and the spatial guidance value as output, and consists of six convolution layers: the left three encode the multispectral image by convolution and the right three decode it. The outputs of the second and third convolution layers are added through a skip connection as the input of the fourth layer, the outputs of the fourth and first layers are added as the input of the fifth layer, and the output of the fifth layer is fed into the sixth layer to obtain the final spatial constraint value. The first five convolution layers use the ReLU activation function and the sixth uses the Sigmoid activation function, which normalizes the input value to the range 0-1:

f(x) = 1 / (1 + e^(−x))    (4)

Here, x represents the input value of the function. Because the Sigmoid activation is used in the sixth convolution layer, the spatial constraint value lies between 0 and 1; this value is then applied to the preliminary fusion result of the MLMB module for spatial guidance and correction, producing a fused image Z_Spag with less spatial distortion, as shown in the following formula:

Z_Spag^i = Z_MLMB^i ⊗ S_pag    (5)

Here, Z_MLMB denotes the preliminary fused image obtained by MLMB, S_pag the spatial constraint value obtained by Spag, ⊗ pixel-by-pixel multiplication, i the corresponding band, and Z_Spag the spatially constrained fused image.
Further, in step 2, spectral distortion of the fused image is avoided by the spectral guidance part Speg, giving the spatial-spectrally constrained fused image;
The spectral guidance part Speg takes the hyperspectral image as input and the spectral guidance value as output. It first obtains the average gray value of each band directly by global pooling, then trains two fully connected layers, the first activated by a ReLU function and the second by a Sigmoid function, to obtain the spectral constraint value;
Since the last activation in the network is a Sigmoid function, its value lies between 0 and 1. This value is then applied to the fused image Z_Spag spatially constrained by the Spag module; the whole process of obtaining a fused image with less spectral distortion is implemented as follows:

Z_Speg^i = Z_Spag^i · S_peg^i    (6)

where Z_Spag denotes the spatially constrained fused image, S_peg the spectral constraint value obtained by training the Speg module, i the corresponding band, and Z_Speg the spatial-spectrally constrained fused image.
Further, step 2 also includes constructing a low-rank prior constraint sub-network LRC for the fused image based on a low-rank neural network; the processing is as follows:
The spatial-spectrally guided fused image Z_Speg, denoted F, is taken as input. Through convolution and dimension-transformation operations, an abundance matrix U and a reshaped matrix F_rs of F are obtained, where K denotes the rank of the decomposition. U′ is then matrix-multiplied with F_rs to obtain a coefficient matrix V. The abundance matrix U and the coefficient matrix V are regularized separately and then combined by matrix multiplication and convolution to obtain the corresponding low-rank constraint value, which is finally added to the spatial-spectrally guided fused image F to obtain the final fused image F_0.
By constructing this multispectral and hyperspectral image fusion method combining a low-rank prior with spatial-spectral guidance, a novel multi-layer multi-branch fusion network SSLRNet is built. A multi-layer multi-branch fusion sub-network (MLMB) is first constructed to extract features from multiple branches, perform multi-level feature fusion and reconstruct a preliminary fused image, which effectively avoids the mutual interference between hyperspectral and multispectral feature extraction found in existing deep-learning-based fusion methods. A spatial-spectrally guided correction sub-network is then constructed, which guides the preliminary fused image produced by MLMB using the band-wise sum image of the multispectral image and the band-average image of the hyperspectral image to reduce the spatial and spectral distortion produced during fusion. Finally, a low-rank prior constraint sub-network based on a low-rank neural network is constructed and combined with the deep learning network; the network itself performs the low-rank decomposition and imposes a low-rank constraint on the fused image, so that the fusion result meets practical application requirements.
Meanwhile, the invention not only fuses images through multi-branch extraction and multi-level fusion, but also reduces the spatial and spectral distortion produced during fusion through spatial-spectral guidance, improving the fusion accuracy of the network. In addition, the low-rank prior constraint makes the fused image better meet practical application requirements. The method provides an important basis and support for subsequent applications of hyperspectral images such as image classification, target recognition and change detection. The proposed fusion method guided by a low-rank prior and spatial-spectral information therefore has significant academic value and practical importance.
Drawings
Fig. 1 is an overall network configuration diagram of the present invention.
Fig. 2 is a diagram of a multi-layer multi-branch fusion subnetwork of the present invention.
Fig. 3 is a diagram of the spatial guidance part of the spatial-spectrally guided fused-image correction sub-network of the present invention.
Fig. 4 is a diagram of the spectral guidance part of the spatial-spectrally guided fused-image correction sub-network of the present invention.
Fig. 5 is a low rank prior constrained sub-network diagram of a fused image based on a low rank neural network of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, a multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information according to one embodiment of the present invention is described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides a multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information, comprising the following steps:
Step 1: Gaussian filtering and downsampling are applied to the given hyperspectral and multispectral images to generate the network training inputs, and the original hyperspectral image is used as the target image for computing the loss function.
Step 2: a multi-layer multi-branch network combining spatial-spectral guidance and a low-rank prior is constructed; the downsampled hyperspectral and multispectral images are input, the hyperparameters of the network are set, and the network is trained according to the loss function until convergence or the maximum number of training epochs is reached.
Step 3: after training, the original hyperspectral and multispectral images are input into the trained network to fuse them and obtain the high-resolution hyperspectral image.
In step 1, the original hyperspectral image and the original multispectral image are first filtered with a Gaussian convolution kernel:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))    (1)

Formula (1) is the two-dimensional Gaussian kernel, where σ denotes the standard deviation and x and y denote the horizontal and vertical offsets from a pixel to the center pixel. Bilinear downsampling is then applied to the filtered images, finally yielding the training inputs of the network constructed in step 2: a hyperspectral image Y ∈ R^(w×h×L) and a multispectral image X ∈ R^(W×H×l), where W, H and l denote the length, width and number of bands of the multispectral image, w, h and L those of the hyperspectral image, and r the ratio between the multispectral and hyperspectral spatial sizes. Meanwhile, the original hyperspectral image serves as the target image Z ∈ R^(W×H×L) during training.
In step 2, the constructed network structure is shown in Fig. 1.
First, a multi-layer multi-branch fusion sub-network MLMB is constructed to extract features from multiple branches, perform multi-level feature fusion, and reconstruct a preliminary fused image. The main idea is to extract the features of the hyperspectral and multispectral images effectively in a multi-branch manner and to fuse them more fully through a multi-level fusion strategy. The sub-network consists of a feature extraction part, a feature fusion part and an image reconstruction part; the specific structure is shown in Fig. 2.
In the feature extraction part of the MLMB network, we construct a multi-branch feature extraction scheme aimed at fully extracting the features of the hyperspectral and multispectral images. First, to avoid incomplete feature extraction caused by mutual interference between the features of the two images, we abandon the conventional practice of stacking the two images directly as network input; instead, the two images are input separately, a multispectral feature extraction branch and a hyperspectral feature extraction branch are constructed, and the same feature extraction module is used in both branches. In the feature extraction module, considering both deep and shallow image features, we build a relatively deep feature extraction branch using skip connections: seven convolution layers perform feature extraction, the outputs of the first and third convolution layers are added as the input of the fourth, the outputs of the fourth and sixth are added as the input of the seventh, and the output of the seventh convolution layer is the final feature of the deep branch. The shallow feature extraction branch has a simpler structure: three different convolution layers extract features sequentially, and the output of the third convolution layer is the final feature of the shallow branch. Extracting the deep and shallow features of the multispectral image and of the hyperspectral image in parallel in this way effectively avoids interference between the features.
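A sketch of the two feature extraction branches follows, continuing the Python/PyTorch sketch above; the layer counts and skip pattern are as described in the text, while the 3×3 kernel size and 64-channel width are my assumptions.

```python
import torch.nn as nn

def conv(in_c, out_c):
    # 3x3 convolution followed by ReLU (kernel size and width are assumed)
    return nn.Sequential(nn.Conv2d(in_c, out_c, 3, padding=1),
                         nn.ReLU(inplace=True))

class DeepBranch(nn.Module):
    """Deep feature extraction branch: seven conv layers with two skip additions."""
    def __init__(self, in_c, c=64):
        super().__init__()
        self.c1, self.c2, self.c3 = conv(in_c, c), conv(c, c), conv(c, c)
        self.c4, self.c5 = conv(c, c), conv(c, c)
        self.c6, self.c7 = conv(c, c), conv(c, c)

    def forward(self, x):
        f1 = self.c1(x)
        f3 = self.c3(self.c2(f1))
        f4 = self.c4(f1 + f3)    # outputs of layers 1 and 3 feed layer 4
        f6 = self.c6(self.c5(f4))
        return self.c7(f4 + f6)  # outputs of layers 4 and 6 feed layer 7

class ShallowBranch(nn.Module):
    """Shallow feature extraction branch: three conv layers in sequence."""
    def __init__(self, in_c, c=64):
        super().__init__()
        self.body = nn.Sequential(conv(in_c, c), conv(c, c), conv(c, c))

    def forward(self, x):
        return self.body(x)
```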
In the feature fusion part of the MLMB network, the four extracted feature sets are not fused directly; instead, we construct a multi-level fusion strategy to make feature fusion more complete, where multi-level fusion refers to deep feature fusion, shallow feature fusion, and deep-shallow feature fusion. First, three convolution layers fuse the deep features of the multispectral and hyperspectral images to obtain the deep features of the fused image. Meanwhile, three convolution layers also fuse the shallow features of the multispectral and hyperspectral images to obtain the shallow features of the fused image. Finally, the deep and shallow features of the fused image are fused through three convolution layers to obtain the fused-image features used for the subsequent reconstruction. By fusing features group by group and progressing level by level, this part makes feature fusion more complete.
In the reconstruction part of the MLMB network, three deconvolution layers are used for reconstruction. In the MLMB module, both the convolution and deconvolution layers are activated with the ReLU function, whose main effect is to set negative inputs to 0, as shown in the following formula:
f(x)=max(0,x) (2)
Here, x represents an input value of the function.
In summary, the MLMB module applies separate feature extraction branches to the hyperspectral and multispectral images to fully extract their respective features while avoiding mutual interference; after feature extraction, a multi-level fusion strategy makes the feature fusion more complete; finally, the image is reconstructed through deconvolution layers, completing the fusion between the images. This can be expressed by the following formula:

Z_MLMB = R_e(F(F_D(D(X) + D(Y)) + F_S(S(X) + S(Y))))    (3)

where D denotes the deep feature extraction branch, S the shallow feature extraction branch, F_D the deep feature fusion layers, F_S the shallow feature fusion layers, F the deep-shallow feature fusion layers, R_e the image reconstruction layers, and Z_MLMB the preliminary fused image produced by the MLMB module.
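Formula (3) can then be sketched as the following module, reusing DeepBranch, ShallowBranch and conv from the previous sketch. The hyperspectral input is assumed to be upsampled to the multispectral grid beforehand so the feature maps can be added; channel widths and the exact deconvolution settings are my assumptions.

```python
class MLMB(nn.Module):
    def __init__(self, ms_bands=4, hs_bands=150, c=64):
        super().__init__()
        self.d_x, self.d_y = DeepBranch(ms_bands, c), DeepBranch(hs_bands, c)
        self.s_x, self.s_y = ShallowBranch(ms_bands, c), ShallowBranch(hs_bands, c)
        def fuse():  # one three-layer fusion block (F_D, F_S or F)
            return nn.Sequential(conv(c, c), conv(c, c), conv(c, c))
        self.f_d, self.f_s, self.f = fuse(), fuse(), fuse()
        self.recon = nn.Sequential(  # R_e: three deconvolution layers with ReLU
            nn.ConvTranspose2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(c, hs_bands, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x, y):
        # x: multispectral image; y: hyperspectral image on the same grid
        deep = self.f_d(self.d_x(x) + self.d_y(y))      # F_D(D(X) + D(Y))
        shallow = self.f_s(self.s_x(x) + self.s_y(y))   # F_S(S(X) + S(Y))
        return self.recon(self.f(deep + shallow))       # Z_MLMB, formula (3)
```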
Second, a spatial-spectrally guided correction sub-network for the fused image is constructed: the gray values obtained by summing the bands of the low-spectral-resolution, high-spatial-resolution image guide the spatial reconstruction of the fused image to avoid spatial distortion (the spatial guidance part Spag), and the average gray value of each band of the low-spatial-resolution, high-spectral-resolution image guides the spectral reconstruction of the fused image to avoid spectral distortion (the spectral guidance part Speg).
The spatial guidance part Spag is shown in Fig. 3. The three convolution layers on the left encode the multispectral image by convolution and the three on the right decode it; in the middle, the outputs of the second and third convolution layers are added through a skip connection as the input of the fourth layer, the outputs of the fourth and first layers are added as the input of the fifth layer, and the output of the fifth layer is fed into the sixth layer to obtain the final spatial constraint value.
The first five convolution layers use the ReLU activation function and the sixth uses the Sigmoid activation function, which normalizes the input value to the range 0-1, as shown in the following formula:

f(x) = 1 / (1 + e^(−x))    (4)

Here, x represents the input value of the function. Because the Sigmoid activation is used in the sixth convolution layer, the spatial constraint value lies between 0 and 1; this value is then applied to the preliminary fusion result of the MLMB module for spatial guidance and correction, and the whole process of obtaining a fused image Z_Spag with less spatial distortion is implemented as shown in the following formula:

Z_Spag^i = Z_MLMB^i ⊗ S_pag    (5)

Here, Z_MLMB denotes the preliminary fused image obtained by MLMB, S_pag the spatial constraint value obtained by Spag, ⊗ pixel-by-pixel multiplication, i the corresponding band, and Z_Spag the spatially constrained fused image.
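A sketch of Spag and of formula (5), under the same assumptions as before (3×3 kernels, 64 channels, and a single-channel guidance map that broadcasts over all bands):

```python
class Spag(nn.Module):
    # Six conv layers: three encode, three decode, with the skips described above
    def __init__(self, ms_bands=4, c=64):
        super().__init__()
        self.c1, self.c2, self.c3 = conv(ms_bands, c), conv(c, c), conv(c, c)
        self.c4, self.c5 = conv(c, c), conv(c, c)
        self.c6 = nn.Sequential(nn.Conv2d(c, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        e1 = self.c1(x)
        e2 = self.c2(e1)
        e3 = self.c3(e2)
        d4 = self.c4(e2 + e3)    # skip: layer 2 + layer 3
        d5 = self.c5(d4 + e1)    # skip: layer 4 + layer 1
        return self.c6(d5)       # S_pag in (0, 1), shape (N, 1, H, W)

# Formula (5): the single-channel S_pag multiplies every band of Z_MLMB
# z_spag = z_mlmb * spag(x_ms)
```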
The specific structure of the spectral guidance part Speg is shown in Fig. 4. It is a relatively simple network that takes the hyperspectral image as input and the spectral guidance value as output: it first obtains the average gray value of each band of the hyperspectral image directly by global pooling, then trains two fully connected layers, the first activated by a ReLU function and the second by a Sigmoid function, to obtain the spectral constraint value.

Since the last activation in the network is a Sigmoid function, its value lies between 0 and 1; the spectral constraint value is then applied to the spatially constrained fused image Z_Spag, and the whole process of obtaining a fused image with less spectral distortion is shown in the following formula:

Z_Speg^i = Z_Spag^i · S_peg^i    (6)

where Z_Spag denotes the spatially constrained fused image, S_peg the spectral constraint value obtained by training the Speg module, i the corresponding band, and Z_Speg the spatial-spectrally constrained fused image.
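Speg amounts to a squeeze-and-excitation style gate over the bands; a sketch follows, where the hidden width of the first fully connected layer is my assumption:

```python
class Speg(nn.Module):
    def __init__(self, hs_bands=150, hidden=32):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # average gray value of each band
        self.fc = nn.Sequential(
            nn.Linear(hs_bands, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hs_bands), nn.Sigmoid())

    def forward(self, y):
        n, c = y.shape[0], y.shape[1]
        s = self.fc(self.pool(y).view(n, c))  # S_peg in (0, 1), one value per band
        return s.view(n, c, 1, 1)

# Formula (6): scale each band of the spatially constrained image
# z_speg = z_spag * speg(y_hs)
```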
Finally, a low-rank prior constraint sub-network LRC for the fused image is constructed based on a low-rank neural network (fusion of the hyperspectral and multispectral images can also be achieved without this sub-network). It uses the strong learning capability of deep learning to sidestep the high computational complexity and convex-relaxation issues of traditional low-rank decomposition: the network obtains low-rank features through matrix decomposition and reconstruction and imposes a low-rank constraint on the fused image, so that the fused image keeps strong inter-band correlation, resists noise interference, and better meets practical application requirements. The specific structure is shown in Fig. 5.
In this sub-network, the spatial-spectrally constrained fused image Z_Speg, denoted F, is taken as input. Through convolution, dimension transformation and related operations, an abundance matrix U and a reshaped matrix F_rs of F are obtained (K denotes the rank of the decomposition). U′ is then matrix-multiplied with F_rs to obtain a coefficient matrix V. The abundance matrix U and the coefficient matrix V are regularized separately and then combined by matrix multiplication and convolution to obtain the corresponding low-rank constraint value, which is finally added to the spatial-spectrally guided fused image F to obtain the final fused image F_0.
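The following sketch gives one reading of the LRC sub-network: the fused image F is factorized into an abundance matrix U and a coefficient matrix V = U′F_rs, and their product is convolved into a residual added back to F. The rank value, the use of a convolution to produce U, and the omission of the regularization terms are all my assumptions, not details fixed by the patent.

```python
class LRC(nn.Module):
    def __init__(self, bands=150, rank=8):
        super().__init__()
        self.rank = rank
        self.to_u = nn.Conv2d(bands, rank, 3, padding=1)   # produces U
        self.post = nn.Conv2d(bands, bands, 3, padding=1)  # final convolution

    def forward(self, f):
        n, l, h, w = f.shape
        u = self.to_u(f).reshape(n, self.rank, h * w)      # U': (N, K, HW)
        f_rs = f.reshape(n, l, h * w)                      # deformation matrix of F
        v = torch.bmm(u, f_rs.transpose(1, 2))             # V = U' F_rs: (N, K, L)
        uv = torch.bmm(u.transpose(1, 2), v)               # low-rank term: (N, HW, L)
        low_rank = uv.transpose(1, 2).reshape(n, l, h, w)  # back to image layout
        return f + self.post(low_rank)                     # final fused image F_0
```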
Example 1
Step 1: Gaussian filtering and downsampling are applied to the given hyperspectral and multispectral images to generate the network training inputs, and the original hyperspectral image is used as the target image for computing the loss function.
In the invention, when the original hyperspectral image is Gaussian-filtered, the filter kernel size is 5×5 and the standard deviation is 2. Filtering is followed by bilinear downsampling at a rate of 3.
In this embodiment, the multispectral image has size 2187×2187 with 4 bands, and the hyperspectral image has size 729×729 with 150 bands. In the implementation, the multispectral and hyperspectral images are Gaussian-filtered with a 5×5 kernel and standard deviation 2, followed by bilinear downsampling at rate 3; finally, the 729×729 multispectral image and the 243×243 hyperspectral image are used as network inputs, the original 729×729 hyperspectral image is used as the target image, and the loss function is computed.
Step 2: a multi-layer multi-branch network combining spatial-spectral guidance and a low-rank prior is constructed; the downsampled hyperspectral and multispectral images are input, the hyperparameters of the network are set, and the network is trained according to the loss function until convergence or the maximum number of training epochs is reached.
In this implementation, the network weights are randomly initialized. In the training stage, an Adam optimizer is used, the loss function is the L2 loss, the overall learning rate is set to 1e-4, the batch size is the whole image, and the network is trained for 10000 epochs. In practice, those skilled in the art can adjust the hyperparameters of the network according to the specific images used.
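A minimal training-loop sketch matching this configuration (Adam, L2 loss, learning rate 1e-4, whole-image batches, 10000 epochs) follows; the SSLRNet wrapper composing MLMB, Spag, Speg and LRC is my own composition of the sketches above, not code from the patent.

```python
class SSLRNet(nn.Module):
    def __init__(self, ms_bands=4, hs_bands=150):
        super().__init__()
        self.mlmb = MLMB(ms_bands, hs_bands)
        self.spag = Spag(ms_bands)
        self.speg = Speg(hs_bands)
        self.lrc = LRC(hs_bands)

    def forward(self, x_ms, y_hs):
        y_up = F.interpolate(y_hs, size=x_ms.shape[-2:], mode="bilinear",
                             align_corners=False)  # align grids for MLMB
        z = self.mlmb(x_ms, y_up)                  # preliminary fusion
        z = z * self.spag(x_ms)                    # formula (5): spatial guidance
        z = z * self.speg(y_hs)                    # formula (6): spectral guidance
        return self.lrc(z)                         # low-rank constrained output

model = SSLRNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for epoch in range(10000):
    opt.zero_grad()
    loss = F.mse_loss(model(x_lr, y_lr), Y)  # L2 loss against the target image
    loss.backward()
    opt.step()
```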
Step 3: after training, the original hyperspectral and multispectral images are input into the trained network to fuse them and obtain the high-resolution hyperspectral image.
In the implementation, the non-downsampled 2187×2187 multispectral image with 4 bands and the non-downsampled 729×729 hyperspectral image with 150 bands are input into the trained network to fuse the images, yielding a 2187×2187 hyperspectral image with 150 bands.
Those skilled in the art will understand that the invention not only performs preliminary fusion with a multi-layer multi-branch sub-network to avoid mutual interference between the two images, but also constructs a spatial-spectrally guided correction sub-network to reduce the spatial and spectral distortion produced during fusion and improve fusion accuracy, and further constructs a low-rank prior constraint sub-network based on a low-rank neural network to impose a low-rank constraint on the fused image, so that the fusion result better meets practical application requirements.
It should be noted and appreciated that various modifications and improvements of the invention described in detail above can be made without departing from the spirit and scope of the invention as claimed in the appended claims. Accordingly, the scope of the claimed subject matter is not limited by any particular exemplary teachings presented.

Claims (7)

1. A multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information, comprising the following steps:
step 1, Gaussian filtering and downsampling are applied to the given hyperspectral and multispectral images to generate the network training inputs, and the original hyperspectral image is used as the target image for computing the loss function;
step 2, a multi-layer multi-branch network combining spatial-spectral guidance and a low-rank prior is constructed; the downsampled hyperspectral and multispectral images are input, the hyperparameters of the network are set, and the network is trained according to the loss function until training converges or the maximum number of training epochs is reached;
the specific structure of the multi-layer multi-branch network combining spatial-spectral guidance and the low-rank prior is as follows:
first, a multi-layer multi-branch fusion sub-network MLMB is constructed to extract features from multiple branches, perform multi-level feature fusion, and reconstruct a preliminary fused image; the features of the hyperspectral and multispectral images are extracted effectively in a multi-branch manner, and the multi-level fusion strategy makes the fusion between features more complete; the sub-network comprises a feature extraction part, a feature fusion part and an image reconstruction part;
then, a spatial-spectrally guided correction sub-network for the fused image is constructed; the gray values obtained by summing the bands of the low-spectral-resolution, high-spatial-resolution image guide the spatial reconstruction of the fused image to avoid spatial distortion, giving the spatially constrained fused image; the average gray value of each band of the low-spatial-resolution, high-spectral-resolution image guides the spectral reconstruction of the fused image to avoid spectral distortion, giving the spatial-spectrally constrained fused image;
in step 2, spatial distortion of the fused image is avoided by the spatial guidance part Spag, giving the spatially constrained fused image;
the spatial guidance part Spag takes the multispectral image as input and the spatial guidance value as output, and consists of six convolution layers: the left three encode the multispectral image by convolution and the right three decode it; the outputs of the second and third convolution layers are added through a skip connection as the input of the fourth layer, the outputs of the fourth and first layers are added as the input of the fifth layer, and the output of the fifth layer is fed into the sixth layer to obtain the final spatial constraint value; the first five convolution layers use the ReLU activation function and the sixth uses the Sigmoid activation function, which normalizes the input value to the range 0-1:

f(x) = 1 / (1 + e^(−x))    (4)

here, x represents the input value of the function; because the Sigmoid activation is used in the sixth convolution layer, the spatial constraint value lies between 0 and 1; this value is then applied to the preliminary fusion result of the MLMB module for spatial guidance and correction, producing a fused image Z_Spag with less spatial distortion, as shown in the following formula:

Z_Spag^i = Z_MLMB^i ⊗ S_pag    (5)

here, Z_MLMB denotes the preliminary fused image obtained by MLMB, S_pag the spatial constraint value obtained by Spag, ⊗ pixel-by-pixel multiplication, i the corresponding band, and Z_Spag the spatially constrained fused image;
in step 2, spectral distortion of the fused image is avoided by the spectral guidance part Speg, giving the spatial-spectrally constrained fused image;
the spectral guidance part Speg takes the hyperspectral image as input and the spectral guidance value as output; it first obtains the average gray value of each band directly by global pooling, then trains two fully connected layers, the first activated by a ReLU function and the second by a Sigmoid function, to obtain the spectral constraint value;
since the last activation in the network is a Sigmoid function, its value lies between 0 and 1; this value is then applied to the fused image Z_Spag spatially constrained by the Spag module, and the whole process of obtaining a fused image with less spectral distortion is implemented as follows:

Z_Speg^i = Z_Spag^i · S_peg^i    (6)

where Z_Spag denotes the spatially constrained fused image, S_peg the spectral constraint value obtained by training the Speg module, i the corresponding band, and Z_Speg the spatial-spectrally constrained fused image;
and step 3, after training, the original hyperspectral and multispectral images are input into the trained network to fuse them and obtain the high-resolution hyperspectral image.
2. The multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information according to claim 1, wherein: in step 1, the original hyperspectral image and the original multispectral image are first filtered with a Gaussian convolution kernel:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))    (1)

formula (1) is the two-dimensional Gaussian kernel, where σ denotes the standard deviation and x and y denote the horizontal and vertical offsets from a pixel to the center pixel; bilinear downsampling is then applied to the filtered images, finally yielding the training inputs of the network constructed in step 2: a multispectral image X ∈ R^(W×H×l) and a hyperspectral image Y ∈ R^(w×h×L), where W = r·w, H = r·h and L > l; W, H and l denote the length, width and number of bands of the multispectral image, w, h and L denote the length, width and number of bands of the hyperspectral image, and r denotes the ratio between the multispectral and hyperspectral spatial sizes; meanwhile, the original hyperspectral image serves as the target image Z ∈ R^(W×H×L) during training.
3. The multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information according to claim 1, wherein: in step 2, the feature extraction part of the MLMB network adopts a multi-branch feature extraction scheme to fully extract the features of the hyperspectral and multispectral images, as follows: first, to avoid mutual interference between the features of the two images, the two images are input separately, a multispectral feature extraction branch and a hyperspectral feature extraction branch are constructed, and the same feature extraction module is used in both branches to extract features effectively; second, within the feature extraction module, a deep feature extraction branch is built using skip connections: seven convolution layers perform feature extraction, the outputs of the first and third convolution layers are added as the input of the fourth, the outputs of the fourth and sixth are added as the input of the seventh, and the output of the seventh convolution layer is the final feature of the deep feature extraction branch; the shallow feature extraction branch extracts features sequentially with three different convolution layers, and the output of the third convolution layer is the final feature of the shallow feature extraction branch.
4. The multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information according to claim 3, wherein: in step 2, the feature fusion part of the MLMB network adopts a multi-level fusion strategy to make feature fusion more complete, the multi-level strategy referring to deep feature fusion, shallow feature fusion, and deep-shallow feature fusion, as follows: first, three convolution layers fuse the deep features of the multispectral and hyperspectral images to obtain the deep features of the fused image; meanwhile, three convolution layers also fuse the shallow features of the multispectral and hyperspectral images to obtain the shallow features of the fused image; finally, the deep and shallow features of the fused image are fused through three convolution layers to obtain the fused-image features used for the subsequent image reconstruction.
5. The multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information according to claim 1, wherein: in step 2, the reconstruction part of the MLMB network uses three deconvolution layers, each activated by a ReLU function, which sets negative inputs to 0, as shown in the following formula:
f(x)=max(0,x) (2)
Here, x represents an input value of the function.
6. The multispectral and hyperspectral image fusion method guided by a low-rank prior and spatial-spectral information according to claim 1, wherein: the overall processing of the multi-layer multi-branch fusion sub-network MLMB is expressed by the following formula:

Z_MLMB = R_e(F(F_D(D(X) + D(Y)) + F_S(S(X) + S(Y))))    (3)

where D denotes the deep feature extraction branch, S the shallow feature extraction branch, F_D the deep feature fusion layers, F_S the shallow feature fusion layers, F the deep-shallow feature fusion layers, R_e the image reconstruction layers, and Z_MLMB the preliminary fused image produced by the MLMB module.
7. The low rank prior and spatial information guided multi-hyperspectral image fusion method of claim 1, wherein the method comprises the steps of: the step 2 also comprises the steps of constructing a fused image low-rank priori constraint sub-network LRC based on a low-rank neural network, and the processing process is as follows;
The fused image subjected to the spatial spectrum constraint is taken as input and denoted F; F undergoes convolution and dimensional transformation operations to obtain three matrices: an abundance matrix U, a deformation matrix U', and a reshaped matrix F_rs of F, where K denotes the rank of the matrix; U' is then matrix-multiplied with F_rs to obtain a coefficient matrix V; regularization is applied to the abundance matrix U and the coefficient matrix V respectively, after which matrix multiplication and convolution yield the corresponding low-rank constraint value; finally, the low-rank constraint value is added to the spatial-spectrum-guided fused image F to obtain the final fused image F_0.
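A loose sketch of what such a sub-network could look like, assuming 1×1 convolutions for the projections and soft-thresholding as a stand-in for the unspecified regularization; none of these layer choices are taken from the claim itself:

import torch
import torch.nn as nn

class LowRankConstraint(nn.Module):
    """Hypothetical LRC sketch: project F to K abundance maps, build a rank-K
    correction via matrix products, and add it back to F."""

    def __init__(self, bands, rank_k=8):
        super().__init__()
        self.to_abundance = nn.Conv2d(bands, rank_k, 1)  # F -> abundance maps (U)
        self.out_conv = nn.Conv2d(bands, bands, 1)       # final convolution

    def forward(self, f):
        b, c, h, w = f.shape
        u = self.to_abundance(f).flatten(2)          # U': (b, K, h*w)
        f_rs = f.flatten(2)                          # F_rs: (b, bands, h*w)
        v = torch.bmm(u, f_rs.transpose(1, 2))       # coefficient matrix V: (b, K, bands)
        # Regularize U' and V; soft-thresholding is an assumed stand-in.
        u = torch.sign(u) * torch.relu(u.abs() - 1e-3)
        v = torch.sign(v) * torch.relu(v.abs() - 1e-3)
        low_rank = torch.bmm(v.transpose(1, 2), u).view(b, c, h, w)  # rank <= K
        return f + self.out_conv(low_rank)           # final fused image F_0

For example, LowRankConstraint(bands=31)(torch.rand(1, 31, 64, 64)) returns a fused image of the same shape with the rank-K correction added.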
CN202210319962.8A 2022-03-29 2022-03-29 Multi-hyperspectral image fusion method guided by low-rank priori and spatial spectrum information Active CN114862731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210319962.8A CN114862731B (en) 2022-03-29 2022-03-29 Multi-hyperspectral image fusion method guided by low-rank priori and spatial spectrum information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210319962.8A CN114862731B (en) 2022-03-29 2022-03-29 Multi-hyperspectral image fusion method guided by low-rank priori and spatial spectrum information

Publications (2)

Publication Number Publication Date
CN114862731A CN114862731A (en) 2022-08-05
CN114862731B true CN114862731B (en) 2024-04-16

Family

ID=82629847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210319962.8A Active CN114862731B (en) 2022-03-29 2022-03-29 Multi-hyperspectral image fusion method guided by low-rank priori and spatial spectrum information

Country Status (1)

Country Link
CN (1) CN114862731B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471437B (en) * 2022-11-14 2023-03-10 中国测绘科学研究院 Image fusion method based on convolutional neural network and remote sensing image fusion method
CN115719309A (en) * 2023-01-10 2023-02-28 湖南大学 Spectrum super-resolution reconstruction method and system based on low-rank tensor network
CN117726916B (en) * 2024-02-18 2024-04-19 电子科技大学 Implicit fusion method for enhancing image resolution fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678723A (en) * 2015-12-29 2016-06-15 内蒙古科技大学 Multi-focus image fusion method based on sparse decomposition and differential image
WO2021184891A1 (en) * 2020-03-20 2021-09-23 中国科学院深圳先进技术研究院 Remotely-sensed image-based terrain classification method, and system
WO2021205735A1 (en) * 2020-04-08 2021-10-14 Mitsubishi Electric Corporation Systems and methods for blind multi- spectral image fusion
CN112634137A (en) * 2020-12-28 2021-04-09 西安电子科技大学 Hyperspectral and full-color image fusion method based on AE extraction of multi-scale spatial spectrum features
CN114119444A (en) * 2021-11-29 2022-03-01 武汉大学 Multi-source remote sensing image fusion method based on deep neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hyperspectral and Multispectral Image Fusion Based on Local Rank and Coupled Spectral Unmixing; Yuan Zhou; IEEE Transactions on Geoscience and Remote Sensing; 2017-08-31; Vol. 55, No. 10; pp. 5997-609 *

Also Published As

Publication number Publication date
CN114862731A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN110119780B Hyperspectral image super-resolution reconstruction method based on generative adversarial network
CN114862731B (en) Multi-hyperspectral image fusion method guided by low-rank priori and spatial spectrum information
CN109859147B Real image denoising method based on generative adversarial network noise modeling
Yu et al. Deep iterative down-up cnn for image denoising
CN109035142B Satellite image super-resolution method combining adversarial network with aerial image prior
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
Lin et al. Hyperspectral image denoising via matrix factorization and deep prior regularization
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
Huang et al. Deep hyperspectral image fusion network with iterative spatio-spectral regularization
Zhang et al. LR-Net: Low-rank spatial-spectral network for hyperspectral image denoising
Benzenati et al. Two stages pan-sharpening details injection approach based on very deep residual networks
Rivadeneira et al. Thermal image super-resolution challenge-pbvs 2021
Huang et al. Lightweight deep residue learning for joint color image demosaicking and denoising
CN114581347B (en) Optical remote sensing spatial spectrum fusion method, device, equipment and medium without reference image
CN113538246A (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN115393191A (en) Method, device and equipment for reconstructing super-resolution of lightweight remote sensing image
CN113902622A (en) Spectrum super-resolution method based on depth prior combined attention
CN113962878B (en) Low-visibility image defogging model method
CN113627487B (en) Super-resolution reconstruction method based on deep attention mechanism
CN116977651B (en) Image denoising method based on double-branch and multi-scale feature extraction
CN113191947B (en) Image super-resolution method and system
CN115861749A (en) Remote sensing image fusion method based on window cross attention
CN114529482A (en) Image compressed sensing reconstruction method based on wavelet multi-channel depth network
CN109785253B (en) Panchromatic sharpening post-processing method based on enhanced back projection
CN111292238A (en) Face image super-resolution reconstruction method based on orthogonal partial least squares

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant