CN111563866A - Multi-source remote sensing image fusion method - Google Patents

Multi-source remote sensing image fusion method Download PDF

Info

Publication number
CN111563866A
CN111563866A CN202010378705.2A CN202010378705A CN111563866A CN 111563866 A CN111563866 A CN 111563866A CN 202010378705 A CN202010378705 A CN 202010378705A CN 111563866 A CN111563866 A CN 111563866A
Authority
CN
China
Prior art keywords
image
component
multispectral
transformation
multispectral image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010378705.2A
Other languages
Chinese (zh)
Other versions
CN111563866B (en
Inventor
李晓玲
聂祥飞
黄海波
张月
冯丽源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Three Gorges University
Original Assignee
Chongqing Three Gorges University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Three Gorges University filed Critical Chongqing Three Gorges University
Priority to CN202010378705.2A priority Critical patent/CN111563866B/en
Publication of CN111563866A publication Critical patent/CN111563866A/en
Application granted granted Critical
Publication of CN111563866B publication Critical patent/CN111563866B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a multi-source remote sensing image fusion method, which comprises the following steps: firstly, obtaining a brightness I component, a chroma H component and a saturation S component of an up-sampled multispectral image by IHS transformation; the I component is filtered by adopting guide filtering, and self-adaptive fractional order differential is constructed to be used for enhancing edge detail information in the full-color image; respectively performing wavelet transformation on the filtered multispectral image I component and the enhanced panchromatic image to obtain a high-frequency component and a low-frequency component of the multispectral image I component and the enhanced panchromatic image, wherein the high-frequency component adopts a principle of taking a large absolute value, and the low-frequency component adopts a weighted average principle; and taking the result obtained by wavelet inverse transformation as a new I component, and finally obtaining a fused image by utilizing IHS inverse transformation. The method combines the guide filtering and fractional order differentiation to fuse the full-color image and the multispectral image, effectively inhibits the spectrum distortion phenomenon of the fusion result, and reduces the loss of space detail information.

Description

Multi-source remote sensing image fusion method
Technical Field
The invention relates to the technical field of image processing, in particular to a multi-source remote sensing image fusion method.
Background
The multi-source remote sensing image fusion is an image processing technology for performing information complementary superposition on two or more remote sensing images from different sensors in the same scene to obtain a comprehensive image with more accurate and complete information. The image fusion is not only an important component of remote sensing detection data processing, but also has wide application space in the fields of environment detection, urban planning, military reconnaissance and the like. In recent years, with the development of signal processing technology, researchers have conducted a great deal of research into image fusion methods.
At present, the multi-source remote sensing image fusion is mainly divided into three levels: pixel layer fusion, feature layer fusion, and decision layer fusion. Compared with the characteristic layer fusion and the decision layer fusion, the pixel layer fusion has better performance in the aspects of accuracy and timeliness. The fusion method for the multi-source remote sensing image of the pixel layer mainly comprises three types: component substitution-based image fusion, multi-resolution analysis-based image fusion, and pattern-based image fusion. The image fusion method based on component substitution has the characteristics of low complexity and easiness in implementation, the space detail information of the fusion result is well stored, but the processing process involves space transformation operation, so that spectral distortion often occurs; the image fusion method based on multi-resolution analysis is not easy to generate the phenomenon of spectral distortion, but the ringing phenomenon is often generated in the fusion process, so that the spatial characteristics of the fusion result are lost; the image fusion method based on the model is not easy to lose spectral distortion and spatial features, but the complexity of the method involved in the fusion process is high. At present, although image fusion methods are emerging, many problems and difficulties still exist, which are mainly reflected in the following aspects: 1) spatial information of the fused image is easy to lose; 2) the spectrum of the fused image is prone to distortion.
Therefore, solving the problems of spatial information loss and spectral distortion in the multi-source remote sensing image fusion process becomes one of the important problems for technicians in the field.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the multi-source remote sensing image fusion method is provided to effectively inhibit the spectrum distortion phenomenon of the fusion result and reduce the loss of spatial detail information.
The technical scheme adopted by the invention for realizing the purpose comprises the following steps:
the method comprises the following steps: acquiring a multispectral image and a panchromatic image of the same ground object target, and performing up-sampling on the multispectral image by using a bicubic interpolation method to keep the size of the multispectral image consistent with that of the panchromatic image;
step two: performing color space transformation on the multispectral image by adopting IHS transformation, extracting a brightness I component of the multispectral image for further processing, and reserving a chromaticity H component and a saturation S component of the multispectral image for subsequent IHS inverse transformation;
step three: constructing self-adaptive fractional order differential for enhancing the edge details of the full-color image and retaining the contour information of the ground features, and meanwhile, carrying out filtering processing on the brightness I component of the multispectral image by adopting guide filtering;
step four: respectively performing wavelet transformation on the full-color image subjected to fractional order differential processing and the multispectral image brightness I component subjected to guided filtering processing to obtain transformed high-frequency and low-frequency components, wherein the high-frequency component adopts a principle of taking a large absolute value, and the low-frequency component adopts a weighted average principle;
step five: obtaining a result image after wavelet inverse transformation through wavelet reconstruction;
step six: taking the result obtained by wavelet inverse transformation as InewAnd performing IHS inverse transformation on the components, the H component and the S component to obtain a fused image.
The invention has the following advantages and beneficial effects:
1. compared with the prior art, the method has the advantages that the brightness component of the multispectral image after up-sampling is directly processed, and the blocking effect is easy to generate; the invention introduces the guide filtering with the structure transfer characteristic and the edge-preserving smooth characteristic, and realizes the inhibition of the blocking effect in the image fusion process. Meanwhile, the spatial texture information and the local detail information of the I component of the multispectral image are enhanced, so that the visual effect of the fused image is improved in an auxiliary mode.
2. Compared with the prior art, the method has the advantages that the problems of reduction of gray level and loss of partial details are caused by direct histogram matching of the brightness component and the full-color image; the invention introduces the fractional order differential into the image fusion, and particularly improves the order of the fractional order differential by combining the image statistical characteristics, thereby avoiding the manual setting of the fixed order and realizing the fractional order differential with self-adaptability. Not only can the image be effectively reserved as a flat part of the base layer, but also the edge detail part of the image is enhanced.
Drawings
FIG. 1 is a flow chart of image fusion according to the method of the present invention.
FIG. 2 is a test image used in experiments by the method of the present invention, and a comparison graph of the fusion effect of different methods on the test image.
Fig. 2(a) is a full-color image, fig. 2(b) is a multispectral image, fig. 2(c) is a fused image obtained by the IHS transform method, fig. 2(d) is a fused image obtained by the Brovey method, fig. 2(e) is a fused image obtained by the PCA method, fig. 2(f) is a fused image obtained by the DWT method, fig. 2(g) is a fused image obtained by the ATWT-M3 method, fig. 2(h) is a fused image obtained by the ATWT method, fig. 2(i) is a fused image obtained by the AWLP method, fig. 2(j) is a fused image obtained by the GS method, fig. 2(k) is a fused image obtained by the HPF method, fig. 2(l) is a fused image obtained by the MTF-GLP method, and fig. 2(M) is a fused image obtained by the method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
As shown in fig. 1, a multi-source remote sensing image fusion method includes the following steps:
the method comprises the following steps: acquiring a multispectral image and a panchromatic image of the same ground object target, and performing up-sampling on the multispectral image by using a bicubic interpolation method to keep the size of the multispectral image consistent with that of the panchromatic image.
Step two: performing color space transformation on the multispectral image by adopting IHS transformation, extracting a brightness I component of the multispectral image for further processing, reserving a chromaticity H component and a saturation S component of the multispectral image for subsequent IHS inverse transformation, wherein the calculation formulas of the I component, the H component and the S component are respectively as follows:
Figure BDA0002481223390000031
H=tan-1[s1/s2](2)
Figure BDA0002481223390000032
r, G, B are the red, green and blue bands of the multispectral image.
Step three: constructing an adaptive fractional order differential for enhancing the edge details of the full-color image and preserving the ground feature profile information, wherein the specific mode is as follows:
assuming that the size of the panchromatic image f (i, j) is M × N, the spatial frequency calculation formula is:
Figure BDA0002481223390000041
Figure BDA0002481223390000042
Figure BDA0002481223390000043
where RF and CF represent the row and column frequencies, respectively, of a full-color image f (i, j). The larger the spatial frequency value of the image is, the richer the spatial information of the image is, and the stronger the layering sense of the image is. Further, the calculation formula of the average gradient of the full-color image f (i, j) is:
Figure BDA0002481223390000044
the larger the value of the average gradient is, the more prominent the detail features such as edges, textures and the like of the image are, and the higher the definition is. Next, the spatial frequency and average gradient of the panchromatic image are normalized using an inverse cotangent nonlinear normalization function, that is:
Figure BDA0002481223390000045
Figure BDA0002481223390000046
considering that the magnitude of the spatial frequency and the average gradient of the image are equally important to the influence of the differential order, the following average weighting processing is performed on the two:
Figure BDA0002481223390000047
because the Tanh function is a monotone increasing function in a real number range and presents a nonlinear increasing trend, the characteristic accords with the change rule of the differential order along with the image statistical information, and the Tanh function is adopted to construct the order function as follows.
Figure BDA0002481223390000051
When the fractional order of differentiation v is larger than 0.5 and 0.7, the texture details of the image can be more highlighted, and the image contour information is fully kept. Therefore, the invention carries out correction processing on f (Y), wherein beta and alpha respectively take 0.5 and 0.7, and further obtains a calculation function v of the self-adaptive fractional order differential order.
Figure BDA0002481223390000052
Meanwhile, the method adopts guide filtering to filter the brightness I component of the multispectral image, and comprises the following specific steps:
first, the radius r of the guided filter is set to 7, and the regularization parameter is set to 10-6. Calculating linear coefficient a of guided filtering(k,l)And b(k,l)The values of (A) are respectively:
Figure BDA0002481223390000053
Figure BDA0002481223390000054
in the formula, | ω | represents a rectangular local window ω having a pixel point (k, l) as a center and r as a radius(k,l)The number of the pixels in the pixel array,
Figure BDA0002481223390000055
and mu(k,l)Respectively, the local window omega(k,l)The variance and mean of the contained pixels,
Figure BDA0002481223390000056
representing the local window omega(k,l)The pixel mean of the multispectral image I component is the regularization parameter. Consider that during the process of guided filtering, a pixel (i, j) may be simultaneously filtered by multiple local windows ω(k,l)And (4) sliding. Thus, the linear coefficient a is required(k,l)And b(k,l)Carrying out average value processing, namely:
Figure BDA0002481223390000057
Figure BDA0002481223390000061
will be provided with
Figure BDA0002481223390000062
And
Figure BDA0002481223390000063
and substituting the linear definition model for guiding the filtering so as to obtain a filtered output image.
Figure BDA0002481223390000064
And taking the result after the guiding filtering as a basic layer of the image, subtracting the multispectral image I component from the basic image to obtain a detail layer of the image, then performing linear transformation on the gray level change range of the detail layer, and finally adding the detail layer and the basic image to obtain the texture structure enhanced image.
Step four: and respectively carrying out wavelet transformation on the full-color image after fractional order differential processing and the multispectral image brightness I component after guide filtering processing to obtain high-frequency and low-frequency components after transformation. The high-frequency component adopts the principle that the absolute value is large, and the low-frequency component adopts the weighted average principle.
Step five: and obtaining a result image after wavelet inverse transformation through wavelet reconstruction.
Step six: taking the result obtained by wavelet inverse transformation as InewAnd performing IHS inverse transformation on the component, the H component and the S component to obtain a fused image, wherein the calculation formula is as follows:
Figure BDA0002481223390000065
wherein R isnew、Gnew、BnewThe three wave bands of red, green and blue of the fused image are respectively.
The invention selects registered multispectral and panchromatic images as test images and studies in comparison with IHS, Brovey, PCA, DWT, ATWT-M3, ATWT, AWLP, GS, HPF, and MTF-GLP methods.
The experimental results are as follows:
experiment 1, the fusion results of different methods on the test images are shown in fig. 2. After comparison and analysis, the fusion result obtained by the method enhances the detail texture of the ground objects in the image while keeping the spectral characteristics of the image, so that the image has higher definition and better visual effect.
Experiment 2, in order to improve the accuracy of the quality evaluation of the fusion result, the method of the invention adopts several common evaluation indexes, including: mean gradient (AG), Mean (ME), Standard Deviation (SD), entropy of Information (IE), Mutual Information (MI), and Spatial Frequency (SF). As the numerical value of the evaluation index is larger, the spatial information of the image is more abundant, and the sense of gradation is stronger, as shown in table 1. As can be seen from Table 1, compared with other methods, the fusion result obtained by the method of the present invention has improved various quality evaluation indexes to different degrees, and has certain comprehensive advantages. The result shows that the method of the invention fuses the image with richer details and better visual effect.
Table 1 statistical table of image fusion result quality evaluation indexes
Figure BDA0002481223390000071
In summary, the multi-source remote sensing image fusion method disclosed by the invention can effectively reduce the problems of spectral distortion phenomenon and spatial detail information loss of the fusion result, and has higher effectiveness and feasibility.
Although the preferred embodiments of the present invention have been described in detail, the scope of the present invention should not be limited to the above embodiments, and other modifications can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (5)

1. A multi-source remote sensing image fusion method is characterized by comprising the following steps:
the method comprises the following steps: acquiring a multispectral image and a panchromatic image of the same ground object target, and performing up-sampling on the multispectral image by using a bicubic interpolation method to keep the size of the multispectral image consistent with that of the panchromatic image;
step two: performing color space transformation on the multispectral image by adopting IHS transformation, extracting a brightness I component of the multispectral image for further processing, and reserving a chromaticity H component and a saturation S component of the multispectral image for subsequent IHS inverse transformation;
step three: constructing self-adaptive fractional order differential for enhancing the edge details of the full-color image and retaining the contour information of the ground features, and meanwhile, carrying out filtering processing on the brightness I component of the multispectral image by adopting guide filtering;
step four: respectively performing wavelet transformation on the full-color image subjected to fractional order differential processing and the multispectral image brightness I component subjected to guided filtering processing to obtain transformed high-frequency and low-frequency components, wherein the high-frequency component adopts a principle of taking a large absolute value, and the low-frequency component adopts a weighted average principle;
step five: obtaining a result image after wavelet inverse transformation through wavelet reconstruction;
step six: taking the result obtained by wavelet inverse transformation as InewAnd performing IHS inverse transformation on the components, the H component and the S component to obtain a fused image.
2. The multi-source remote sensing image fusion method according to claim 1, wherein the luminance I component, the chrominance H component, and the saturation S component obtained in the second step are respectively:
Figure FDA0002481223380000011
H=tan-1[s1/s2]
Figure FDA0002481223380000012
r, G, B are the red, green and blue bands of the multispectral image.
3. The multi-source remote sensing image fusion method according to claim 1, wherein the adaptive fractional order differentiation method adopted in the third step is specifically as follows:
first, the spatial frequency and average gradient of the panchromatic image f (i, j) are calculated, wherein the spatial frequency is obtained using the following formula:
Figure FDA0002481223380000021
Figure FDA0002481223380000022
Figure FDA0002481223380000023
wherein the size of the panchromatic image f (i, j) is M N, and RF and CF represent the row frequency and column frequency of the panchromatic image f (i, j), respectively; in addition, the calculation formula of the average gradient of the full-color image f (i, j) is:
Figure FDA0002481223380000024
next, the spatial frequency and average gradient of the panchromatic image are normalized using an inverse cotangent nonlinear normalization function, that is:
Figure FDA0002481223380000025
Figure FDA0002481223380000026
the following average weighting processing is continuously carried out on the two:
Figure FDA0002481223380000027
finally, the differential order v is constructed as follows using the Tanh function:
Figure FDA0002481223380000028
Figure FDA0002481223380000029
wherein, beta and alpha are respectively 0.5 and 0.7.
4. The multi-source remote sensing image fusion method according to claim 1, wherein the specific way of filtering the I component of the multispectral image by using the guided filtering in the third step is as follows:
first, the radius r of the guided filter is set to 7, and the regularization parameter is set to 10-6Calculating the linear coefficient a of the guided filtering(k,l)And b(k,l)The values of (A) are respectively:
Figure FDA0002481223380000031
Figure FDA0002481223380000032
in the formula, | ω | represents a rectangular local window ω having a pixel point (k, l) as a center and r as a radius(k,l)The number of the pixels in the pixel array,
Figure FDA0002481223380000033
and mu(k,l)Respectively, the local window omega(k,l)The variance and mean of the contained pixels,
Figure FDA0002481223380000034
representing the local window omega(k,l)The pixel mean value of the component I of the contained multispectral image is a regularization parameter;
linear coefficient of alignment a(k,l)And b(k,l)Carrying out average value processing, namely:
Figure FDA0002481223380000035
Figure FDA0002481223380000036
handle
Figure FDA0002481223380000037
And
Figure FDA0002481223380000038
substituting the linear definition model for guiding filtering to obtain filtered outputAnd (3) image outputting:
Figure FDA0002481223380000039
and taking the result after the guiding filtering as a basic layer of the image, subtracting the multispectral image I component from the basic image to obtain a detail layer of the image, then performing linear transformation on the gray level change range of the detail layer, and finally adding the detail layer and the basic image to obtain the texture structure enhanced image.
5. The multi-source remote sensing image fusion method according to claim 1, wherein step six is to obtain InewAnd performing IHS inverse transformation on the component, the H component and the S component, wherein the calculation formula of the final fusion image is as follows:
Figure FDA0002481223380000041
wherein R isnew、Gnew、BnewThe three wave bands of red, green and blue of the fused image are respectively.
CN202010378705.2A 2020-05-07 2020-05-07 Multisource remote sensing image fusion method Active CN111563866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010378705.2A CN111563866B (en) 2020-05-07 2020-05-07 Multisource remote sensing image fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010378705.2A CN111563866B (en) 2020-05-07 2020-05-07 Multisource remote sensing image fusion method

Publications (2)

Publication Number Publication Date
CN111563866A true CN111563866A (en) 2020-08-21
CN111563866B CN111563866B (en) 2023-05-12

Family

ID=72070788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010378705.2A Active CN111563866B (en) 2020-05-07 2020-05-07 Multisource remote sensing image fusion method

Country Status (1)

Country Link
CN (1) CN111563866B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330581A (en) * 2020-11-02 2021-02-05 燕山大学 Fusion method and system of SAR and multispectral image
CN113992838A (en) * 2021-08-09 2022-01-28 中科联芯(广州)科技有限公司 Imaging focusing method and control method of silicon-based multispectral signal
CN114897757A (en) * 2022-06-10 2022-08-12 大连民族大学 Remote sensing image fusion method based on NSST and parameter self-adaptive PCNN

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216557A (en) * 2007-12-27 2008-07-09 复旦大学 Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method
KR20090096142A (en) * 2008-03-07 2009-09-10 한국항공우주연구원 Satellite image fusion method and system
CN101930604A (en) * 2010-09-08 2010-12-29 中国科学院自动化研究所 Infusion method of full-color image and multi-spectral image based on low-frequency correlation analysis
CN103679661A (en) * 2013-12-25 2014-03-26 北京师范大学 Significance analysis based self-adaptive remote sensing image fusion method
CN104346790A (en) * 2014-10-30 2015-02-11 中山大学 Remote sensing image fusion method through combining HCS with wavelet transform
CN104851077A (en) * 2015-06-03 2015-08-19 四川大学 Adaptive remote sensing image panchromatic sharpening method
CN104851091A (en) * 2015-04-28 2015-08-19 中山大学 Remote sensing image fusion method based on convolution enhancement and HCS transform
CN105741252A (en) * 2015-11-17 2016-07-06 西安电子科技大学 Sparse representation and dictionary learning-based video image layered reconstruction method
CN106023129A (en) * 2016-05-26 2016-10-12 西安工业大学 Infrared and visible light image fused automobile anti-blooming video image processing method
CN108874857A (en) * 2018-04-13 2018-11-23 重庆三峡学院 A kind of local records document is compiled and digitlization experiencing system
CN108921809A (en) * 2018-06-11 2018-11-30 上海海洋大学 Multispectral and panchromatic image fusion method under integral principle based on spatial frequency
CN109166089A (en) * 2018-07-24 2019-01-08 重庆三峡学院 The method that a kind of pair of multispectral image and full-colour image are merged
US10176966B1 (en) * 2017-04-13 2019-01-08 Fractilia, Llc Edge detection system
CN109993717A (en) * 2018-11-14 2019-07-09 重庆邮电大学 A kind of remote sensing image fusion method of combination guiding filtering and IHS transformation

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216557A (en) * 2007-12-27 2008-07-09 复旦大学 Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method
KR20090096142A (en) * 2008-03-07 2009-09-10 한국항공우주연구원 Satellite image fusion method and system
CN101930604A (en) * 2010-09-08 2010-12-29 中国科学院自动化研究所 Infusion method of full-color image and multi-spectral image based on low-frequency correlation analysis
CN103679661A (en) * 2013-12-25 2014-03-26 北京师范大学 Significance analysis based self-adaptive remote sensing image fusion method
CN104346790A (en) * 2014-10-30 2015-02-11 中山大学 Remote sensing image fusion method through combining HCS with wavelet transform
CN104851091A (en) * 2015-04-28 2015-08-19 中山大学 Remote sensing image fusion method based on convolution enhancement and HCS transform
CN104851077A (en) * 2015-06-03 2015-08-19 四川大学 Adaptive remote sensing image panchromatic sharpening method
CN105741252A (en) * 2015-11-17 2016-07-06 西安电子科技大学 Sparse representation and dictionary learning-based video image layered reconstruction method
CN106023129A (en) * 2016-05-26 2016-10-12 西安工业大学 Infrared and visible light image fused automobile anti-blooming video image processing method
US10176966B1 (en) * 2017-04-13 2019-01-08 Fractilia, Llc Edge detection system
CN108874857A (en) * 2018-04-13 2018-11-23 重庆三峡学院 A kind of local records document is compiled and digitlization experiencing system
CN108921809A (en) * 2018-06-11 2018-11-30 上海海洋大学 Multispectral and panchromatic image fusion method under integral principle based on spatial frequency
CN109166089A (en) * 2018-07-24 2019-01-08 重庆三峡学院 The method that a kind of pair of multispectral image and full-colour image are merged
CN109993717A (en) * 2018-11-14 2019-07-09 重庆邮电大学 A kind of remote sensing image fusion method of combination guiding filtering and IHS transformation

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330581A (en) * 2020-11-02 2021-02-05 燕山大学 Fusion method and system of SAR and multispectral image
CN112330581B (en) * 2020-11-02 2022-07-12 燕山大学 Fusion method and system of SAR and multispectral image
CN113992838A (en) * 2021-08-09 2022-01-28 中科联芯(广州)科技有限公司 Imaging focusing method and control method of silicon-based multispectral signal
CN114897757A (en) * 2022-06-10 2022-08-12 大连民族大学 Remote sensing image fusion method based on NSST and parameter self-adaptive PCNN

Also Published As

Publication number Publication date
CN111563866B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
CN109272010B (en) Multi-scale remote sensing image fusion method based on convolutional neural network
CN111563866B (en) Multisource remote sensing image fusion method
CN109102469B (en) Remote sensing image panchromatic sharpening method based on convolutional neural network
CN105915909B (en) A kind of high dynamic range images layered compression method
CN109191390A (en) A kind of algorithm for image enhancement based on the more algorithm fusions in different colours space
CN109447922B (en) Improved IHS (induction heating system) transformation remote sensing image fusion method and system
CN111260580B (en) Image denoising method, computer device and computer readable storage medium
CN108921809B (en) Multispectral and panchromatic image fusion method based on spatial frequency under integral principle
CN107958450B (en) Panchromatic multispectral image fusion method and system based on self-adaptive Gaussian filtering
Liu et al. Low-light video image enhancement based on multiscale retinex-like algorithm
CN109389560A (en) A kind of adaptive weighted filter image denoising method, device and image processing equipment
Wen et al. An effective network integrating residual learning and channel attention mechanism for thin cloud removal
CN115861083B (en) Hyperspectral and multispectral remote sensing fusion method for multiscale and global features
CN108711160B (en) Target segmentation method based on HSI (high speed input/output) enhanced model
CN111882485B (en) Hierarchical feature feedback fusion depth image super-resolution reconstruction method
CN113129300A (en) Drainage pipeline defect detection method, device, equipment and medium for reducing false detection rate
CN111815548A (en) Medium-long wave dual-waveband infrared image fusion method
CN110084774B (en) Method for minimizing fusion image by enhanced gradient transfer and total variation
Sadia et al. Color image enhancement using multiscale retinex with guided filter
Yamaguchi et al. Image demosaicking via chrominance images with parallel convolutional neural networks
CN116109535A (en) Image fusion method, device and computer readable storage medium
CN111080560B (en) Image processing and identifying method
CN114897757A (en) Remote sensing image fusion method based on NSST and parameter self-adaptive PCNN
CN113989137A (en) Method for extracting pigmentation of facial skin image and forming spectrum of brown region

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant