CN114240814B - Multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection - Google Patents

Multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection Download PDF

Info

Publication number
CN114240814B
CN114240814B (application CN202111557631.XA)
Authority
CN
China
Prior art keywords
spatial
spectrum
image
space
multispectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111557631.XA
Other languages
Chinese (zh)
Other versions
CN114240814A (en
Inventor
袁媛 (Yuan Yuan)
孙义 (Sun Yi)
张园林 (Zhang Yuanlin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202111557631.XA priority Critical patent/CN114240814B/en
Publication of CN114240814A publication Critical patent/CN114240814A/en
Application granted granted Critical
Publication of CN114240814B publication Critical patent/CN114240814B/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection. The method first divides a pair of multispectral and panchromatic images to obtain several groups of paired multispectral and panchromatic image blocks with the same content; it then performs feature-space mapping and conversion to obtain a spectral mapping vector and a spatial mapping vector. Features are extracted, injection coefficients are computed from the spectral and spatial feature maps, and the coefficients are injected into the spectral feature map to obtain joint spatial-spectral features. Finally, two rounds of up-sampling produce an image with both high spatial resolution and high spectral resolution. By exploiting the global dependency between spectral and spatial information, the method improves the spatial resolution of the multispectral image so that the fused image achieves dual fidelity in the spectral and spatial domains.

Description

Multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method for fusing multispectral and panchromatic remote sensing images.
Background
The remote sensing image fusion task based on multispectral and panchromatic images aims to fully exploit the high-spatial-resolution panchromatic image and the low-spatial-resolution multispectral image to obtain a fused image with both high spatial and high spectral resolution.
Existing fusion methods fall into two categories: traditional linear methods and deep learning methods. Traditional methods rely on spatial transforms, spatial decomposition, or hand-crafted constraints added to the fusion process, and performed well in earlier years. With the rise of deep learning, however, the limited learning capacity of traditional methods has become apparent: deep learning methods learn features automatically and objectively, and their strong nonlinear capacity and concise optimization have greatly improved fusion performance. The existing deep learning fusion methods are described in detail below.
The first is the detail-injection remote sensing image fusion method proposed by Deng et al. in "L. Deng, G. Vivone, C. Jin, and J. Chanussot, Detail Injection-Based Deep Convolutional Neural Networks for Pansharpening, IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 8, pp. 6995-7010, 2021". That work takes the differential information between the panchromatic image and the up-sampled multispectral image as training input, and uses the learning capacity of a convolutional neural network to generate the spatial detail required by each band of the fused image. However, the method simply interpolates the multispectral image to the scale of the panchromatic image and injects the generated spatial details directly into it; the modulation of the spatial features is ignored, so both the spectral and the spatial characteristics of the fused image are damaged.
The second is the shallow-deep convolutional neural network with multi-band spatial injection proposed by Liu et al. in "L. Liu, J. Wang, E. Zhang, B. Li, X. Zhu, Y. Zhang, and J. Peng, Shallow-Deep Convolutional Network and Spectral-Discrimination-Based Detail Injection for Multispectral Imagery Pan-Sharpening, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 1772-1783, 2020". That work achieves spatial enhancement of the multispectral image by generating a separate detail band of the panchromatic image for each spectral band; because it accounts for the important fact that each band of the multispectral image requires different spatial details, the fusion effect is improved. However, the method still interpolates the multispectral image directly to the scale of the panchromatic image during feature extraction, and injects the spatial information of the panchromatic image into the multispectral image after only a simple weighted average, so the fused image suffers severe spectral distortion or spatial distortion. (A sketch of the generic detail-injection scheme shared by these baselines follows.)
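Both baselines share the same high-level recipe: interpolate the multispectral image to the panchromatic scale, predict spatial details with a CNN, and add them directly. The sketch below illustrates only that generic detail-injection scheme; the `detail_net` module and the intensity-difference input are illustrative assumptions, not the exact published architectures.

```python
import torch.nn.functional as F

def detail_injection_fuse(ms, pan, detail_net):
    """Generic detail-injection pansharpening (illustrative sketch).

    ms:  low-resolution multispectral tensor, shape (B, C, h, w)
    pan: high-resolution panchromatic tensor, shape (B, 1, H, W)
    detail_net: a CNN predicting per-band spatial details (assumption)
    """
    # Simple interpolation of the MS image to the PAN scale -- the step
    # the present invention criticizes as ignoring spatial modulation
    ms_up = F.interpolate(ms, size=pan.shape[-2:], mode='bicubic',
                          align_corners=False)
    # Differential information between PAN and the MS intensity component
    diff = pan - ms_up.mean(dim=1, keepdim=True)
    details = detail_net(diff)          # (B, C, H, W) per-band details
    return ms_up + details              # direct, unmodulated injection
```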
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection. The method first divides a pair of multispectral and panchromatic images to obtain several groups of paired multispectral and panchromatic image blocks with the same content; it then performs feature-space mapping and conversion to obtain a spectral mapping vector and a spatial mapping vector. Features are extracted, injection coefficients are computed from the spectral and spatial feature maps, and the coefficients are injected into the spectral feature map to obtain joint spatial-spectral features. Finally, two rounds of up-sampling produce an image with both high spatial and high spectral resolution. By exploiting the global dependency between spectral and spatial information, the method improves the spatial resolution of the multispectral image so that the fused image achieves dual fidelity in the spectral and spatial domains.
The technical solution adopted by the invention to solve this problem comprises the following steps:
step 1: preparing the input images;
dividing a pair of multispectral and panchromatic images to obtain several groups of paired multispectral and panchromatic image blocks with the same content;
step 2: feature-space mapping and conversion;
inputting a group of paired multispectral and panchromatic image blocks into a dual-stream feature extraction module; the module is divided into two parallel branches, each comprising several convolution layers and several residual blocks; the multispectral image block and the panchromatic image block enter the two branches respectively, and after passing through the convolution layers each is mapped and converted from the low-dimensional image space into its own high-dimensional feature space, yielding a spectral mapping vector from the multispectral image block and a spatial mapping vector from the panchromatic image block;
step 3: feature extraction;
inputting the spectral mapping vector and the spatial mapping vector into the residual blocks of their respective branches for feature extraction, obtaining a spectral feature map and a spatial feature map respectively;
step 4: enhancing the spatial detail of the spectral feature map;
step 4-1: linear mapping;
applying a linear mapping to the spectral feature map and to the spatial feature map, and matching the dimensions of the resulting spectral and spatial vectors so that matrix multiplication can be performed between them;
step 4-2: generating the injection coefficients;
performing matrix multiplication on the dimension-matched spectral and spatial vectors to obtain a global representation of the spectral-spatial dependency; normalizing this global representation to form the injection coefficients of the spatial feature map;
step 4-3: spatial injection;
multiplying the injection coefficients with the spatial feature map to obtain a calibrated spatial feature map; adding the calibrated spatial features to the spectral feature map to obtain a spectral feature map with enhanced spatial detail;
step 5: extracting joint spatial-spectral features;
inputting the spatial-detail-enhanced spectral feature map into several residual blocks for feature extraction, obtaining a joint spatial-spectral feature map;
step 6: up-sampling;
up-sampling the joint spatial-spectral feature map so that its spatial size is doubled;
step 7: iterative fusion;
returning to step 4, taking the joint spatial-spectral feature map as the spectral feature map of step 4 while keeping the spatial feature map unchanged, and re-executing steps 4 to 6 to obtain a new joint spatial-spectral feature map whose spatial size is enlarged 4x;
step 8: traversing the paired multispectral and panchromatic image blocks with the same content obtained in step 1, executing steps 2 to 7 for each pair, and stitching the joint spatial-spectral feature maps generated from all pairs to obtain the image with high spatial resolution and high spectral resolution.
Preferably, the linear mapping in step 4-1 is implemented as a 1x1 convolution.
Preferably, the upsampling is one of pyramid progressive upsampling, interpolation upsampling, transposed convolution upsampling, and channel shuffling upsampling.
The beneficial effects of the invention are as follows:
1. More robust to the cross-scale difference between the source images. Based on pyramid progressive up-sampling, the invention performs the gradual up-sampling of the multispectral image and the feature processing at different scale levels, buffering the scale difference between the source images and improving the cross-scale learning capacity of the method.
2. Better-matched spatial detail injection. The invention takes the difference in imaging range between the multispectral and panchromatic images into account and modulates the extracted spatial features, thereby obtaining spatial details that better conform to the spectral characteristics and reducing the damage to both the spectral and the spatial characteristics.
3. Better fusion performance and fast inference. By exploiting the global dependency between spectral and spatial information, the invention improves the spatial resolution of the multispectral image so that the fused image achieves dual fidelity in the spectral and spatial domains; combined with multiple residual learning and progressive learning, the network converges faster, is easier to train, and avoids a large amount of computation.
4. Greater practical value. The invention fuses multispectral and panchromatic remote sensing images, and the resulting images with high spatial and high spectral resolution have a very wide range of applications. Compared with the source images, they offer higher practical value for tasks such as land cover classification, change detection, and urban planning.
Drawings
FIG. 1 is a block diagram of the method of the present invention.
Fig. 2 is a flow chart of the method of the present invention.
FIG. 3 shows the high-spatial- and high-spectral-resolution images generated by the method of the present invention and the comparison methods, where (a) is the low-resolution multispectral image, (b) the panchromatic image, (c) PNN, (d) PanNet, (e) DiCNN, (f) DCNN, (g) SRPPNN, and (h) PDIMN.
Detailed Description
The invention will be further described with reference to the drawings and examples.
The invention is a deep learning-based method that mainly provides a novel fusion framework and a spatial modulation scheme, addressing respectively the inherent scale difference between multispectral and panchromatic images and the fidelity of spatial modulation at different scales. The invention effectively protects the spectral and spatial characteristics of the fused image, so that the fusion result achieves dual fidelity in spectrum and space, further improving fusion performance.
As shown in FIG. 1 and FIG. 2, the multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection comprises the following steps:
step 1: preparing the input images;
dividing a pair of multispectral and panchromatic images to obtain several groups of paired multispectral and panchromatic image blocks with the same content; (a sketch of this block division is given below)
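As a concrete illustration of step 1, the sketch below cuts a registered image pair into content-aligned blocks. The 32x32 MS / 128x128 PAN block sizes follow the QuickBird setup in the embodiment below; the non-overlapping stride is our assumption.

```python
import numpy as np

def make_training_pairs(ms, pan, ms_patch=32, scale=4):
    """Cut a registered MS/PAN pair into content-aligned blocks.

    ms:  (h, w, C) multispectral array; pan: (H, W) panchromatic array,
    with H = scale * h. Each 32x32 MS block is paired with the 128x128
    PAN block covering the same ground area.
    """
    pairs = []
    for i in range(0, ms.shape[0] - ms_patch + 1, ms_patch):
        for j in range(0, ms.shape[1] - ms_patch + 1, ms_patch):
            ms_block = ms[i:i + ms_patch, j:j + ms_patch]
            # The PAN window starts at the scaled coordinates
            pi, pj = i * scale, j * scale
            pan_block = pan[pi:pi + ms_patch * scale,
                            pj:pj + ms_patch * scale]
            pairs.append((ms_block, pan_block))
    return pairs
```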
step 2: feature-space mapping and conversion;
inputting a group of paired multispectral and panchromatic image blocks into a dual-stream feature extraction module; the module is divided into two parallel branches, each comprising several convolution layers and several residual blocks; the multispectral image block and the panchromatic image block enter the two branches respectively, and after passing through the convolution layers each is mapped and converted from the low-dimensional image space into its own high-dimensional feature space, yielding a spectral mapping vector from the multispectral image block and a spatial mapping vector from the panchromatic image block;
step 3: feature extraction;
inputting the spectral mapping vector and the spatial mapping vector into the residual blocks of their respective branches for feature extraction, obtaining a spectral feature map and a spatial feature map respectively; (a sketch of this dual-stream module follows)
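A minimal PyTorch sketch of the dual-stream module of steps 2 and 3. The choice of 4 multispectral bands, 64 feature channels, and 4 residual blocks per branch is illustrative; the description does not fix these numbers.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)  # residual connection

class DualStreamExtractor(nn.Module):
    """Two parallel branches: convolutions map each input into its own
    high-dimensional feature space (step 2); residual blocks then extract
    the spectral and spatial feature maps (step 3)."""
    def __init__(self, ms_bands=4, feat=64, n_res=4):
        super().__init__()
        self.ms_map = nn.Sequential(
            nn.Conv2d(ms_bands, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1))
        self.pan_map = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1))
        self.ms_res = nn.Sequential(*[ResidualBlock(feat) for _ in range(n_res)])
        self.pan_res = nn.Sequential(*[ResidualBlock(feat) for _ in range(n_res)])

    def forward(self, ms, pan):
        spec = self.ms_res(self.ms_map(ms))     # spectral feature map
        spat = self.pan_res(self.pan_map(pan))  # spatial feature map
        return spec, spat
```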
step 4: enhancing the spatial detail of the spectral feature map;
step 4-1: linear mapping;
applying a linear mapping to the spectral feature map and to the spatial feature map, and matching the dimensions of the resulting spectral and spatial vectors so that matrix multiplication can be performed between them; specifically:
a 1x1 convolution linearly maps the spectral feature map once to obtain a vector of dimension (H, W, C), which is reshaped to (C, HW); a 1x1 convolution linearly maps the spatial feature map twice to obtain two vectors of dimension (H, W, C), which are reshaped to (HW, C) and (C, HW) respectively;
step 4-2: generating the injection coefficients;
performing matrix multiplication on the dimension-matched spectral and spatial vectors to obtain a global representation of the spectral-spatial dependency; normalizing this global representation to form the injection coefficients of the spatial feature map; specifically:
multiplying the spectral-mapping vector of dimension (C, HW) by the spatial-mapping vector of dimension (HW, C) outputs a vector of dimension (C, C); this vector is normalized with a softmax function and matrix-multiplied with the spatial-mapping vector of dimension (C, HW) to obtain a vector of dimension (C, HW), which is finally reshaped to (H, W, C) to form the injection coefficients;
step 4-3: spatial injection;
multiplying the injection coefficients with the spatial feature map to obtain a calibrated spatial feature map; adding the calibrated spatial features to the spectral feature map to obtain a spectral feature map with enhanced spatial detail; (a sketch of steps 4-1 to 4-3 follows)
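Steps 4-1 to 4-3 amount to a channel-wise cross-attention between the two branches. The sketch below follows the (C, HW) x (HW, C) -> (C, C) dimensions given above; resizing the spatial feature map when the two maps differ in scale is our assumption about how dimension matching is achieved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulationInjection(nn.Module):
    """Steps 4-1 to 4-3: generate injection coefficients from the global
    spectral-spatial dependency and inject calibrated spatial detail."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch, 1)  # spectral mapping (once)  -> (C, HW)
        self.k = nn.Conv2d(ch, ch, 1)  # spatial mapping (first)  -> (HW, C)
        self.v = nn.Conv2d(ch, ch, 1)  # spatial mapping (second) -> (C, HW)

    def forward(self, spec, spat):
        # Assumption: bring the spatial map to the spectral map's scale
        # when the two differ (the patent handles this via its pyramid)
        if spat.shape[-2:] != spec.shape[-2:]:
            spat = F.interpolate(spat, size=spec.shape[-2:],
                                 mode='bilinear', align_corners=False)
        b, c, h, w = spec.shape
        q = self.q(spec).flatten(2)                   # (B, C, HW)
        k = self.k(spat).flatten(2).transpose(1, 2)   # (B, HW, C)
        v = self.v(spat).flatten(2)                   # (B, C, HW)
        attn = torch.softmax(torch.bmm(q, k), dim=-1) # (B, C, C) global dependency
        coeff = torch.bmm(attn, v).view(b, c, h, w)   # injection coefficients
        calibrated = coeff * spat                     # step 4-3: calibrate
        return spec + calibrated                      # spatial-detail-enhanced spectral map
```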
step 5: extracting joint spatial-spectral features;
inputting the spatial-detail-enhanced spectral feature map into several residual blocks for feature extraction, obtaining a joint spatial-spectral feature map;
step 6: pyramid up-sampling;
performing pyramid progressive up-sampling on the joint spatial-spectral feature map so that its spatial size is doubled; (one level of this pyramid is sketched below)
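One pyramid level can be realized with channel-shuffle (sub-pixel) up-sampling, one of the variants named in the description; the sketch below shows that choice, not a mandated implementation.

```python
import torch.nn as nn

class UpsampleX2(nn.Module):
    """One pyramid level: doubles the spatial size of the joint
    spatial-spectral feature map via PixelShuffle."""
    def __init__(self, ch):
        super().__init__()
        self.expand = nn.Conv2d(ch, ch * 4, 3, padding=1)
        self.shuffle = nn.PixelShuffle(2)  # (B, 4C, H, W) -> (B, C, 2H, 2W)

    def forward(self, x):
        return self.shuffle(self.expand(x))
```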
step 7: iterative fusion;
returning to step 4, taking the joint spatial-spectral feature map as the spectral feature map of step 4 while keeping the spatial feature map unchanged, and re-executing steps 4 to 6 to obtain a new joint spatial-spectral feature map whose spatial size is enlarged 4x;
step 8: traversing the paired multispectral and panchromatic image blocks with the same content obtained in step 1, executing steps 2 to 7 for each pair, and stitching the joint spatial-spectral feature maps generated from all pairs to obtain the image with high spatial resolution and high spectral resolution. (An end-to-end sketch of steps 2 to 7 follows.)
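Putting the pieces together, a hedged end-to-end sketch of steps 2 to 7 for one block pair, reusing the DualStreamExtractor, ModulationInjection, UpsampleX2 and ResidualBlock modules sketched above. Two rounds bridge the 4x MS/PAN scale gap; the output head mapping joint features back to image bands is our assumption.

```python
import torch.nn as nn

class PyramidFusionNet(nn.Module):
    """Illustrative assembly of the sketched components; not the exact
    patented architecture."""
    def __init__(self, ms_bands=4, feat=64, n_res=4):
        super().__init__()
        self.extract = DualStreamExtractor(ms_bands, feat, n_res)
        self.inject = nn.ModuleList(
            [ModulationInjection(feat) for _ in range(2)])
        self.joint = nn.ModuleList([
            nn.Sequential(*[ResidualBlock(feat) for _ in range(n_res)])
            for _ in range(2)])
        self.up = nn.ModuleList([UpsampleX2(feat) for _ in range(2)])
        self.head = nn.Conv2d(feat, ms_bands, 3, padding=1)  # assumption

    def forward(self, ms, pan):
        spec, spat = self.extract(ms, pan)        # steps 2-3
        for inject, joint, up in zip(self.inject, self.joint, self.up):
            spec = inject(spec, spat)             # step 4: modulation injection
            spec = joint(spec)                    # step 5: joint features
            spec = up(spec)                       # step 6: 2x up-sampling
        return self.head(spec)                    # fused high-resolution block
```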
Specific embodiment:
1. Simulation conditions
In this embodiment, the simulation is implemented in the PyTorch deep learning framework and run on a TITAN-X GPU with 12 GB of video memory under the Ubuntu 18.04 operating system.
2. Simulation content
The data used in the simulation are 1505 paired multispectral and panchromatic images (32x32 and 128x128 pixels respectively) from the QuickBird satellite (a public dataset). Each pair is down-sampled to obtain low-resolution images that serve as training input, while the original multispectral image is kept as the ground-truth reference (a sketch of this construction follows).
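This training-data construction matches Wald's protocol: both sources are degraded by the MS/PAN scale ratio so that the original multispectral image can act as ground truth. A minimal sketch, assuming bicubic degradation (the text does not state the filter):

```python
import torch.nn.functional as F

def wald_downsample(ms, pan, scale=4):
    """Build a reduced-resolution training pair.

    ms:  (B, C, h, w) multispectral tensor; pan: (B, 1, H, W) panchromatic
    tensor. Returns the degraded inputs and the original MS as target.
    """
    ms_lr = F.interpolate(ms, scale_factor=1 / scale, mode='bicubic',
                          align_corners=False)
    pan_lr = F.interpolate(pan, scale_factor=1 / scale, mode='bicubic',
                           align_corners=False)
    return ms_lr, pan_lr, ms  # inputs (ms_lr, pan_lr), ground truth ms
```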
To demonstrate the effectiveness of the method, PNN, PanNet, MSDCNN, DiCNN, DCNN and SRPPNN were chosen as comparison methods on the QuickBird data.
PNN is the fusion method proposed in "G. Masi, D. Cozzolino, L. Verdoliva, and G. Scarpa, Pansharpening by convolutional neural networks, Remote Sensing, vol. 8, no. 7, p. 594, 2016";
PanNet is the fusion method proposed in "J. Yang, X. Fu, Y. Hu, Y. Huang, X. Ding, and J. Paisley, PanNet: A Deep Network Architecture for Pan-Sharpening, in Proc. IEEE International Conference on Computer Vision, 2017, pp. 1753-1761";
MSDCNN is the fusion method proposed in "Q. Yuan, Y. Wei, X. Meng, H. Shen, and L. Zhang, A Multiscale and Multidepth Convolutional Neural Network for Remote Sensing Imagery Pan-Sharpening, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 3, pp. 978-989, 2018";
DiCNN is the fusion method proposed in "L. He, Y. Rao, J. Li, J. Chanussot, A. Plaza, J. Zhu, and B. Li, Pansharpening via Detail Injection Based Convolutional Neural Networks, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 4, pp. 1188-1204, 2019";
DCNN is the fusion method proposed in "L. Deng, G. Vivone, C. Jin, and J. Chanussot, Detail Injection-Based Deep Convolutional Neural Networks for Pansharpening, IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 8, pp. 6995-7010, 2021";
SRPPNN is the fusion method proposed in "J. Cai and B. Huang, Super-resolution-guided progressive pansharpening based on a deep convolutional neural network, IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 6, pp. 5206-5220, 2021".
PDIMN denotes the result obtained by our method. PSNR, SSIM, CC and SAM are evaluation indices of fused-image quality; the comparison results are shown in Table 1:
TABLE 1. Verification experiment on the QuickBird dataset

Method  PNN      PanNet   DiCNN    DCNN     SRPPNN   PDIMN
PSNR    34.3108  23.6141  29.4532  38.6042  38.5339  38.6292
SSIM    0.9335   0.7230   0.7546   0.9704   0.9703   0.9714
CC      0.9791   0.7724   0.9353   0.9922   0.9921   0.9922
SAM     0.1481   0.1737   0.1443   0.1346   0.1355   0.1345
As can be seen from Table 1, the quality of the fused images produced by the invention is better than that of the other methods on the QuickBird dataset.
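Of the four indices, SAM measures spectral fidelity and PSNR overall reconstruction quality. Common formulations are sketched below; the exact variants behind Table 1 are not stated in the text.

```python
import torch

def psnr(fused, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = torch.mean((fused - ref) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

def spectral_angle_mapper(fused, ref, eps=1e-8):
    """SAM: mean angle (radians) between corresponding spectral vectors
    of the fused and reference images (lower is better).

    fused, ref: (C, H, W) tensors.
    """
    f = fused.flatten(1)                  # (C, HW): one spectrum per pixel
    r = ref.flatten(1)
    dot = (f * r).sum(0)
    denom = f.norm(dim=0) * r.norm(dim=0) + eps
    angle = torch.acos((dot / denom).clamp(-1.0, 1.0))
    return angle.mean()
```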
FIG. 3 shows the high-spatial- and high-spectral-resolution images generated by the invention and the comparison methods. The fused images generated by the existing methods exhibit either spectral distortion or spatial distortion, whereas the fused image generated by our method (PDIMN) is almost identical to the ground-truth image.

Claims (3)

1. A multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection, characterized by comprising the following steps:
step 1: preparing the input images;
dividing a pair of multispectral and panchromatic images to obtain several groups of paired multispectral and panchromatic image blocks with the same content;
step 2: feature-space mapping and conversion;
inputting a group of paired multispectral and panchromatic image blocks into a dual-stream feature extraction module; the module is divided into two parallel branches, each comprising several convolution layers and several residual blocks; the multispectral image block and the panchromatic image block enter the two branches respectively, and after passing through the convolution layers each is mapped and converted from the low-dimensional image space into its own high-dimensional feature space, yielding a spectral mapping vector from the multispectral image block and a spatial mapping vector from the panchromatic image block;
step 3: feature extraction;
inputting the spectral mapping vector and the spatial mapping vector into the residual blocks of their respective branches for feature extraction, obtaining a spectral feature map and a spatial feature map respectively;
step 4: enhancing the spatial detail of the spectral feature map;
step 4-1: linear mapping;
applying a linear mapping to the spectral feature map and to the spatial feature map, and matching the dimensions of the resulting spectral and spatial vectors so that matrix multiplication can be performed between them;
step 4-2: generating the injection coefficients;
performing matrix multiplication on the dimension-matched spectral and spatial vectors to obtain a global representation of the spectral-spatial dependency; normalizing this global representation to form the injection coefficients of the spatial feature map;
step 4-3: spatial injection;
multiplying the injection coefficients with the spatial feature map to obtain a calibrated spatial feature map; adding the calibrated spatial features to the spectral feature map to obtain a spectral feature map with enhanced spatial detail;
step 5: extracting joint spatial-spectral features;
inputting the spatial-detail-enhanced spectral feature map into several residual blocks for feature extraction, obtaining a joint spatial-spectral feature map;
step 6: up-sampling;
up-sampling the joint spatial-spectral feature map so that its spatial size is doubled;
step 7: iterative fusion;
returning to step 4, taking the joint spatial-spectral feature map as the spectral feature map of step 4 while keeping the spatial feature map unchanged, and re-executing steps 4 to 6 to obtain a new joint spatial-spectral feature map whose spatial size is enlarged 4x;
step 8: traversing the paired multispectral and panchromatic image blocks with the same content obtained in step 1, executing steps 2 to 7 for each pair, and stitching the joint spatial-spectral feature maps generated from all pairs to obtain the image with high spatial resolution and high spectral resolution.
2. The multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection according to claim 1, wherein the linear mapping in step 4-1 is a 1x1 convolution.
3. The method of claim 1, wherein the upsampling is one of pyramid progressive upsampling, interpolation upsampling, transposed convolution upsampling, and channel shuffling upsampling.
CN202111557631.XA 2021-12-19 2021-12-19 Multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection Active CN114240814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111557631.XA CN114240814B (en) 2021-12-19 2021-12-19 Multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111557631.XA CN114240814B (en) 2021-12-19 2021-12-19 Multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection

Publications (2)

Publication Number Publication Date
CN114240814A CN114240814A (en) 2022-03-25
CN114240814B (en) 2024-02-23

Family

ID=80758875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111557631.XA Active CN114240814B (en) 2021-12-19 2021-12-19 Multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection

Country Status (1)

Country Link
CN (1) CN114240814B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120000736A (en) * 2010-06-28 2012-01-04 서울대학교산학협력단 A method for pan-sharpening of high-spatial resolution satellite image by using parameter reflecting spectral and spatial characteristics of image
CN109767412A (en) * 2018-12-28 2019-05-17 珠海大横琴科技发展有限公司 A kind of remote sensing image fusing method and system based on depth residual error neural network
CN112669248A (en) * 2020-12-28 2021-04-16 西安电子科技大学 Hyperspectral and panchromatic image fusion method based on CNN and Laplacian pyramid
CN113689370A (en) * 2021-07-27 2021-11-23 南京信息工程大学 Remote sensing image fusion method based on deep convolutional neural network
CN113763299A (en) * 2021-08-26 2021-12-07 中国人民解放军军事科学院国防工程研究院工程防护研究所 Panchromatic and multispectral image fusion method and device and application thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
自适应权重注入机制遥感图像融合 (Remote sensing image fusion with an adaptive weight injection mechanism); Fang Shuai; Chao Lei; Cao Fengyun; Journal of Image and Graphics (中国图象图形学报); 2020-03-16 (No. 03); full text *

Also Published As

Publication number Publication date
CN114240814A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN110660038B (en) Multispectral image and full-color image fusion method based on generation countermeasure network
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN104112263B (en) The method of full-colour image and Multispectral Image Fusion based on deep neural network
CN111127374B (en) Pan-sharing method based on multi-scale dense network
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN112184554B (en) Remote sensing image fusion method based on residual mixed expansion convolution
Hu et al. Pan-sharpening via multiscale dynamic convolutional neural network
CN112488978A (en) Multi-spectral image fusion imaging method and system based on fuzzy kernel estimation
Yang et al. License plate image super-resolution based on convolutional neural network
CN113327218A (en) Hyperspectral and full-color image fusion method based on cascade network
CN108492249A (en) Single frames super-resolution reconstruction method based on small convolution recurrent neural network
CN115861083B (en) Hyperspectral and multispectral remote sensing fusion method for multiscale and global features
Cao et al. New architecture of deep recursive convolution networks for super-resolution
CN111861884A (en) Satellite cloud image super-resolution reconstruction method based on deep learning
Hu et al. Hyperspectral image super resolution based on multiscale feature fusion and aggregation network with 3-D convolution
Huang et al. Deep Gaussian scale mixture prior for image reconstruction
CN115760814A (en) Remote sensing image fusion method and system based on double-coupling deep neural network
CN114266957A (en) Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation
CN116309227A (en) Remote sensing image fusion method based on residual error network and spatial attention mechanism
CN115311184A (en) Remote sensing image fusion method and system based on semi-supervised deep neural network
CN113744134B (en) Hyperspectral image super-resolution method based on spectrum unmixed convolution neural network
CN114140357A (en) Multi-temporal remote sensing image cloud region reconstruction method based on cooperative attention mechanism
Zhao et al. SSIR: Spatial shuffle multi-head self-attention for single image super-resolution
CN114240814B (en) Multispectral and panchromatic remote sensing image fusion method based on pyramid modulation injection
CN111899166A (en) Medical hyperspectral microscopic image super-resolution reconstruction method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant