CN117372276B - Multispectral and panchromatic image fusion panchromatic sharpening method based on side window filtering - Google Patents

Multispectral and panchromatic image fusion panchromatic sharpening method based on side window filtering

Info

Publication number
CN117372276B
Authority
CN
China
Prior art keywords
image
full
multispectral
spatial resolution
side window
Prior art date
Legal status
Active
Application number
CN202311639653.XA
Other languages
Chinese (zh)
Other versions
CN117372276A
Inventor
宋延嵩
董科研
郝群
张博
朴明旭
刘天赐
梁宗林
翟东航
Current Assignee
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date
Application filed by Changchun University of Science and Technology
Priority to CN202311639653.XA
Publication of CN117372276A
Application granted
Publication of CN117372276B


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20212 — Image combination
    • G06T2207/20221 — Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a panchromatic sharpening method for the fusion of multispectral and panchromatic images based on side window filtering, belonging to the technical field of multispectral image processing. First, a low-spatial-resolution panchromatic image is constructed by projecting the multispectral image and calculating fusion weight values. Next, an improved high-spatial-resolution panchromatic image is generated by GS (Gram-Schmidt) transformation and statistics adjustment. Finally, an SWF (side window filtering) model is introduced for filtering to obtain better edge details. The method not only improves the spatial resolution of the image but also preserves spectral information and spatial details, and has a wide field of application.

Description

Multispectral and panchromatic image fusion panchromatic sharpening method based on side window filtering
Technical Field
The invention relates to a full-color sharpening method for fusion of multispectral and full-color images based on side window filtering, and belongs to the technical field of multispectral image processing.
Background
Multispectral images contain more complete spatial and spectral information than RGB images and are widely used in fields such as medical imaging, food detection, and face recognition. Because the spatial resolution of multispectral images is low, increasing their resolution has become an important problem. Panchromatic sharpening refers to combining multispectral (MS) images with panchromatic (PAN) data of higher spatial resolution. Since the early stages of this research topic, the problem of quality assessment has played a central role in the literature, driving researchers to conduct extensive studies. Because of its ill-posed nature, the problem admits no single solution; in fact, no reference image is available for comparison with the result of the fusion process.
Low-pass filter models typically remove edge detail information when used for image filtering. The prior art provides the SWF (side window filtering) model, which remarkably improves the edge-preserving capability of image filtering. Inspired by this model, the SWF model is applied to the PS (panchromatic sharpening) process, so that effective and appropriate detail features can be better extracted from the image, laying a foundation for improving the spatial resolution of the fused MS image and for maintaining its spectral information after detail injection. Meanwhile, considering that the fused image should not rely on the MS image alone but should also account for its relation to the PAN image, the detail information is further increased. To this end, the low-pass filtering inside the weight function is replaced by SWF filtering, so that effective and appropriate detail features are better preserved, and the filtered image is placed into the weight function to enhance the detail information of the intensity component. Through the above strategy, the detail of the image fusion is increased.
Disclosure of Invention
The invention provides a panchromatic sharpening method based on the fusion of multispectral and panchromatic images with side window filtering, which aims to solve the problem of the low spatial resolution of multispectral images in the prior art by improving that resolution through multispectral and panchromatic image fusion.
The panchromatic sharpening method based on the fusion of multispectral and panchromatic images with side window filtering comprises the following steps:
s100, projecting the MS image into a new space, and constructing a simulated low-spatial-resolution Pan image by calculating fusion weight values of the multispectral bands with different weights;
s200, performing the GS transformation on the simulated low-spatial-resolution Pan image and on a plurality of resampled low-spatial-resolution spectral band images at the same scale, wherein the simulated low-spatial-resolution Pan image is used as the first band of the GS transformation;
s300, adjusting the statistics of the high-spatial-resolution Pan image so that they match the statistics of the first transform band obtained by the GS transformation, thereby obtaining an improved high-spatial-resolution Pan image;
s400, replacing the first transform band obtained by the GS transformation with the improved high-spatial-resolution Pan image to generate a new set of transform bands;
s500, performing the inverse GS transformation on the new set of transform bands to obtain an MS image with enhanced spatial resolution;
s600, introducing the SWF model, replacing the original low-pass filtering in the weight function with SWF filtering, and placing the filtered image into the weight function to obtain better edge details.
Further, in S100, the method includes the following steps:
s110, applying a transformation algorithm to the multispectral image so as to generate a series of new images, wherein each image represents a specific feature in the original multispectral image;
s120, analyzing each image component obtained through transformation, and identifying a component containing a main spatial structure and a component containing main spectral information;
s130, separating components of the spatial structure and components of the spectrum information.
Further, in S110, the transformation algorithm is PCA or IHS.
Further, in S300, S400 and S500,
the component replacement fusion process is strongly simplified, without explicit computation of the forward and backward transforms, and for each band $k$ is described by the following equation:

$\hat{MS}_k = \widetilde{MS}_k + G_k \cdot (P - I_L)$    (1)

wherein $\widetilde{MS}_k$ represents the multispectral image interpolated to the panchromatic image scale, and the subscript $k$ denotes the $k$-th spectral band; $G_k$ is an injection gain matrix, stacked in a multi-dimensional array $G$; $P$ represents the PAN image; matrix multiplication is point-wise; finally, $I_L$ is the so-called intensity component, defined as

$I_L = \sum_i w_i \widetilde{MS}_i$    (2)

wherein the weight vector $w = [w_1, \ldots, w_N]$ is the first row of the forward transform matrix.

The GS orthogonalization process uses the intensity component $I_L$ as the first vector of the new orthogonal basis. Orthogonalization processes one MS vector at a time, finding the projection of the MS vector on the plane or hyperplane defined by the previously found orthogonal vectors, together with its orthogonal component, such that the sum of the orthogonal and projected components equals the zero-mean version of the original vectorized band. Panchromatic sharpening is completed by substituting the histogram-matched $P$ for $I_L$ before performing the inverse transform. The fusion process is therefore described by (1), in which the injection gains are:

$G_k = \dfrac{\mathrm{cov}(I_L, \widetilde{MS}_k)}{\mathrm{var}(I_L)} \cdot \mathbf{1}$    (3)

wherein $\mathbf{1}$ is an all-ones matrix; $\mathrm{cov}(X, Y)$ represents the covariance between the two images $X$ and $Y$; $\mathrm{var}(X)$ is the variance of $X$.
Further, in S600, the method includes the following steps:
s610, firstly determining the radius of a square window;
s620, calculating 8 side window separation kernels which are left L, right R, upper U, lower D, northwest NW, northeast NE, southwest SW and southeast SE side windows according to the radius;
s630, calculating projection distances of all side windows according to calculation results of 8 side window separation kernels;
s640, obtaining the minimum signed distance of 8 side windows, and taking the minimum signed distance as the output of the current window;
and S650, finally, taking the sliding window as the unit, moving the window sequentially to perform side window processing on the MS image, applying the side window filter to perform a weighted average over the MS bands, and comparing the results.
A storage medium having a computer program stored thereon, the computer program when executed by a processor implementing a full color sharpening method for fusion of multispectral and full color images based on side window filtering as described above.
A computer device, comprising: the full-color sharpening method comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the program to realize the full-color sharpening method based on multispectral and full-color image fusion of side window filtering.
The invention has the beneficial effects that:
the invention provides a full-color sharpening method for fusion of multispectral and full-color images based on side window filtering, which mainly solves the problem of low spatial resolution of the existing multispectral images. The method optimizes spectrum matching processing by effectively separating spatial structure and spectrum information of a multispectral image, improves detail expression by applying full-color sharpening technology, and restores an image of an original space by inverse transformation. In addition, the method innovatively uses a GS orthogonalization process and an SWF model to improve image edge details and generate an efficient weight function by minimizing mean square error. This approach not only improves the spatial resolution of the image, but also preserves spectral information and spatial detail.
Drawings
Fig. 1 is a PAN image;
FIG. 2 is a low pass filtered image;
FIG. 3 is a SWF filtered image;
fig. 4 is a PAN image in a dataset;
FIG. 5 is a multispectral image in a dataset;
FIG. 6 is a fusion image of different algorithms;
FIG. 7 is a method flow diagram of a full color sharpening method of the invention based on side window filtering for multi-spectral and full color image fusion.
Detailed Description
Referring to fig. 7, the panchromatic sharpening method based on the fusion of multispectral and panchromatic images with side window filtering comprises the following steps:
s100, projecting the MS image into a new space, and constructing a simulated low-spatial-resolution Pan image by calculating fusion weight values of the multispectral bands with different weights;
s200, performing the GS transformation on the simulated low-spatial-resolution Pan image and on a plurality of resampled low-spatial-resolution spectral band images at the same scale, wherein the simulated low-spatial-resolution Pan image is used as the first band of the GS transformation;
s300, adjusting the statistics of the high-spatial-resolution Pan image so that they match the statistics of the first transform band obtained by the GS transformation, thereby obtaining an improved high-spatial-resolution Pan image;
s400, replacing the first transform band obtained by the GS transformation with the improved high-spatial-resolution Pan image to generate a new set of transform bands;
s500, performing the inverse GS transformation on the new set of transform bands to obtain an MS image with enhanced spatial resolution;
s600, introducing the SWF model, replacing the original low-pass filtering in the weight function with SWF filtering, and placing the filtered image into the weight function to obtain better edge details.
Specifically, S100 involves the spatial reconstruction of the multispectral image: a simulated low-spatial-resolution Pan (panchromatic) image is constructed by projecting the MS (multispectral) image into a new space and calculating its fusion weight values. In this way, the spatial information of the multispectral image is optimized and the subsequent fusion process becomes more accurate, improving the spatial quality of the final fused image.
S200 involves the GS transformation and band simulation: the GS transformation is performed on both the simulated low-spatial-resolution Pan image and the plurality of resampled low-spatial-resolution spectral band images. This helps separate the spectral and spatial information of the image, improving spatial resolution while maintaining spectral information.
S300 adjusts the statistics of the high-spatial-resolution Pan image so that they match the statistics of the first transform band of the GS transformation, ensuring that the fused panchromatic image is statistically consistent with the original panchromatic image and further improving image quality.
S400 generates a new set of transform bands by replacing the first transform band obtained by the GS transformation with the improved high-spatial-resolution Pan image, effectively combining the high-spatial-resolution Pan image with the multispectral information and laying the foundation for a high-quality fused image.
S500 involves the inverse GS transformation and the spatial-resolution boost: the inverse GS transformation is performed to obtain an MS image with enhanced spatial resolution. This step is key to the whole fusion process, since it directly determines the spatial resolution of the final fused image.
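The statistics adjustment of S300 amounts to matching the mean and standard deviation of the PAN image to those of the first GS transform band. A minimal sketch (the helper name `match_statistics` is illustrative, not from the patent):

```python
import numpy as np

def match_statistics(pan, ref):
    """S300 sketch: shift and scale the high-resolution PAN image so
    that its mean and standard deviation match those of the first GS
    transform band `ref`."""
    pan = pan.astype(float)
    return (pan - pan.mean()) / pan.std() * ref.std() + ref.mean()
```

After this adjustment the PAN image is, to first and second order, statistically interchangeable with the band it will replace in S400.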
S600 involves SWF filtering and edge-detail enhancement: the SWF model (side window filtering model) is introduced to replace the original low-pass filtering, and the filtered image is placed into the weight function to obtain better edge details. This greatly improves the edge sharpness and detail information of the image and raises the spatial resolution of the multispectral image.
Further, in S100, the method includes the following steps:
s110, applying a transformation algorithm to the multispectral image so as to generate a series of new images, wherein each image represents a specific feature in the original multispectral image;
s120, analyzing each image component obtained through transformation, and identifying a component containing a main spatial structure and a component containing main spectral information;
s130, separating components of the spatial structure and components of the spectrum information.
Specifically, S110 applies a transformation algorithm to the multispectral image to generate a series of new images, each representing a particular feature in the original multispectral image. The specific features in the original image can be further analyzed and processed, so that more accurate image fusion is facilitated, and the quality of the final image is improved. S120, identifying the components containing the main spatial structure and the components containing the main spectral information by analyzing the transformed image components can provide more useful information for the subsequent fusion step. In this way, the information of the multispectral image can be better understood and utilized, providing more efficient image processing and fusion. S130, separating the components of the spatial structure and the components of the spectrum information, so that in the subsequent fusion process, the two types of components can be respectively subjected to optimization processing. The processing mode can effectively balance and improve the spatial resolution and the spectral response integrity of the full-color image generated by fusion, so that the fused full-color image has higher spatial resolution and original spectral information is reserved.
Further, in S110, the transformation algorithm is PCA or IHS.
In particular, PCA transformation is a statistical tool used to analyze and simplify the dataset. In image processing, PCA can identify the most important features in multispectral image data, converting these features into a set of linearly uncorrelated variables, i.e., principal components. These principal components typically contain a large portion of the variability of the data so that fewer data sets can be used to approximate the original multispectral data. This is particularly useful for image compression and noise reduction, as it can reduce the amount of data while retaining critical information. PCA is also helpful in improving image contrast and enhancing specific image features, which are important for subsequent image analysis and interpretation.
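As a sketch of how the PCA decomposition of S110 might look in practice (an illustrative implementation, not the patent's code), each pixel is treated as a K-dimensional spectral vector and projected onto the eigenvectors of the band covariance matrix:

```python
import numpy as np

def pca_components(ms):
    """Decompose an (H, W, K) multispectral image into K principal-
    component images. PC1 typically carries the dominant spatial
    structure; later components carry mostly spectral variation."""
    H, W, K = ms.shape
    X = ms.reshape(-1, K).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)       # K x K band covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]       # sort by descending variance
    pcs = Xc @ vecs[:, order]
    return pcs.reshape(H, W, K), vecs[:, order], mean
```

Because the eigenvector matrix is orthogonal, the original bands are recovered exactly by the inverse projection, which is what makes the component-replacement step invertible.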
IHS transformation is a method of converting the RGB color space into the IHS color space. In IHS space, the intensity (I), hue (H), and saturation (S) of the image are separated, which allows the brightness and color information of the image to be independently adjusted. In multispectral image processing, IHS transforms can be used to improve the visual effect of the image, increase the dynamic range of the image, and allow finer color correction and enhancement. For example, in the fusion of multispectral images, IHS transforms may be used to combine high-resolution panchromatic images with multispectral images, thereby improving the spatial resolution of the images while maintaining the original spectral characteristics.
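A minimal sketch of the fast additive IHS fusion variant alluded to above (one common linear formulation; the patent does not prescribe this exact form): the intensity $I$ of the upsampled 3-band MS image is replaced by the PAN image by adding the difference $(PAN - I)$ to every band, which leaves the chromatic components unchanged.

```python
import numpy as np

def fast_ihs_fusion(ms, pan):
    """Fast IHS fusion sketch: ms is (H, W, 3) upsampled MS,
    pan is (H, W). Injecting (pan - I) into every band replaces
    the intensity while preserving hue and saturation."""
    i = ms.mean(axis=2)                  # intensity I = (R + G + B) / 3
    return ms + (pan - i)[..., None]     # inject PAN detail per band
```

By construction, the per-pixel band mean of the fused result equals the PAN image exactly.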
The choice of using PCA or IHS transformation algorithm depends on the particular application requirements and the desired effect. PCA is better suited for data reduction and feature extraction, while IHS is better suited for improving visual performance and fine image adjustment. Both methods can effectively improve the spatial and spectral resolution of the multispectral image, providing more abundant and accurate information for subsequent image analysis and application.
Further, in S300, S400 and S500,
the component replacement fusion process is strongly simplified, without the need for explicit computation of the forward and backward transforms, and for each band $k$ is described by the following equation:

$\hat{MS}_k = \widetilde{MS}_k + G_k \cdot (P - I_L)$    (1)

wherein $\widetilde{MS}_k$ represents the multispectral image interpolated to the panchromatic image scale, and the subscript $k$ denotes the $k$-th spectral band; $G_k$ is an injection gain matrix, stacked in a multi-dimensional array $G$; $P$ represents the PAN image; matrix multiplication is point-wise; finally, $I_L$ is the so-called intensity component, defined as

$I_L = \sum_i w_i \widetilde{MS}_i$    (2)

wherein the weight vector $w = [w_1, \ldots, w_N]$ is the first row of the forward transform matrix.

The GS orthogonalization process uses the intensity component $I_L$ as the first vector of the new orthogonal basis. Orthogonalization processes one MS vector at a time, finding the projection of the MS vector on the plane or hyperplane defined by the previously found orthogonal vectors, together with its orthogonal component, such that the sum of the orthogonal and projected components equals the zero-mean version of the original vectorized band. Panchromatic sharpening is completed by substituting the histogram-matched $P$ for $I_L$ before performing the inverse transform. The fusion process is therefore described by (1), in which the injection gains are:

$G_k = \dfrac{\mathrm{cov}(I_L, \widetilde{MS}_k)}{\mathrm{var}(I_L)} \cdot \mathbf{1}$    (3)

wherein $\mathbf{1}$ is an all-ones matrix; $\mathrm{cov}(X, Y)$ represents the covariance between the two images $X$ and $Y$; $\mathrm{var}(X)$ is the variance of $X$.
Specifically, in the invention the component replacement fusion process is greatly simplified, avoiding the need to explicitly calculate the forward and backward transforms. Here, the fusion process is in fact a combination of the multispectral image (interpolated to the panchromatic image scale) and the panchromatic image, scaled by an injection gain matrix. Equation (1) represents this fusion process:

$\hat{MS}_k = \widetilde{MS}_k + G_k \cdot (P - I_L)$    (1)

wherein $\widetilde{MS}_k$ represents the multispectral image interpolated to the panchromatic image scale, and the subscript $k$ denotes the $k$-th spectral band; $G_k$ is an injection gain matrix, stacked in a multi-dimensional array $G$; $P$ represents the PAN image; matrix multiplication is point-wise. Finally, $I_L$ is the so-called intensity component, a weighted average of all spectral bands, defined by equation (2):

$I_L = \sum_i w_i \widetilde{MS}_i$    (2)

wherein the weight vector $w = [w_1, \ldots, w_N]$ is the first row of the forward transform matrix.

In the GS orthogonalization process, the intensity component $I_L$ is used as the first vector of the new orthogonal basis; the MS vectors are then processed one at a time, finding the projection of each MS vector onto the plane or hyperplane defined by the previously found orthogonal vectors, together with its orthogonal component.

Finally, panchromatic sharpening is completed by replacing $I_L$ with the histogram-matched $P$. This means the gray levels of the panchromatic image are adjusted to match the intensity component of the interpolated multispectral image; the detail information of the panchromatic image is then injected into the multispectral image, improving its resolution.
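The adjustment of the PAN gray levels to the intensity component can be sketched by exact histogram specification (an illustrative method; `histogram_match` is a hypothetical helper, and it assumes both images have the same number of pixels, as they do at the PAN scale):

```python
import numpy as np

def histogram_match(src, ref):
    """Map each source pixel to the reference value occupying the
    same rank, so the output has exactly ref's histogram while
    preserving the spatial ordering of src."""
    ranks = np.argsort(np.argsort(src.ravel()))   # rank of each pixel
    matched = np.sort(ref.ravel().astype(float))[ranks]
    return matched.reshape(src.shape)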
Finally, the calculation of the injection gains is described by equation (3):

$G_k = \dfrac{\mathrm{cov}(I_L, \widetilde{MS}_k)}{\mathrm{var}(I_L)} \cdot \mathbf{1}$    (3)

In general, this fusion process exploits the high resolution of the panchromatic image, fusing it into the multispectral image to produce a multispectral image of higher resolution while retaining the original spectral information.
Further, in S600, the method includes the following steps:
s610, firstly determining the radius of a square window;
s620, calculating 8 side window separation kernels according to the radius, wherein the 8 side window separation kernels are respectively left L, right R, upper U, lower D, northwest NW, northeast NE, southwest SW and southeast SE side windows;
s630, calculating projection distances of all side windows according to calculation results of 8 side window separation kernels;
s640, obtaining the minimum signed distance of 8 side windows, and taking the minimum signed distance as the output of the current window;
and S650, finally, taking the sliding window as the unit, moving the window sequentially to perform side window processing on the MS image, applying the side window filter to perform a weighted average over the MS bands, and comparing the results.
Specifically, S610 is the first step of the image processing and involves determining the size of the square window used for image analysis; the window radius determines the extent of the pixel area considered during side window processing. S620 calculates the side window separation kernels in 8 directions according to the determined radius: left (L), right (R), up (U), down (D), northwest (NW), northeast (NE), southwest (SW), and southeast (SE). Each side window kernel represents the neighborhood extending from the current pixel in the specified direction. S630 uses the 8 side window kernels calculated in the previous step to compute the projection distance of each side window; these distances can be regarded as the distances from the center pixel to the filtered value of each directional window. S640 takes the minimum signed distance over the projection distances of the 8 side windows; this minimum identifies the most representative feature direction within the current window. S650 moves the window step by step in sliding-window fashion and applies the side window filter to each pixel of the MS image; this process involves a weighted average over the MS bands and a comparison of the results for the different windows and directions.
The aim of this process is to analyze and process image data, in particular multispectral images, in order to better identify and emphasize specific features in the images. By calculating the projection distances in different directions and selecting the smallest signed distance, important features in the image can be effectively enhanced while suppressing noise and irrelevant information.
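The S610-S650 procedure can be sketched with the simplest box-kernel variant of side window filtering (an illustrative, unoptimized implementation that selects the side-window mean closest to the centre pixel; the patent's exact kernels and distance definitions may differ):

```python
import numpy as np

def side_window_box_filter(img, r):
    """One pass of a side-window box filter over a 2-D image.

    For each pixel, the mean over each of the 8 side windows
    (L, R, U, D, NW, NE, SW, SE) of radius r is computed; the output
    is the mean whose distance to the centre pixel is smallest,
    which preserves edges that a full box filter would blur.
    """
    img = img.astype(float)
    pad = np.pad(img, r, mode='edge')
    H, W = img.shape
    # (row0, row1, col0, col1) offsets of each side window
    windows = {
        'L': (-r, r, -r, 0), 'R': (-r, r, 0, r),
        'U': (-r, 0, -r, r), 'D': (0, r, -r, r),
        'NW': (-r, 0, -r, 0), 'NE': (-r, 0, 0, r),
        'SW': (0, r, -r, 0), 'SE': (0, r, 0, r),
    }
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            ci, cj = i + r, j + r            # centre in padded coords
            best = None
            for r0, r1, c0, c1 in windows.values():
                m = pad[ci + r0:ci + r1 + 1, cj + c0:cj + c1 + 1].mean()
                d = abs(m - pad[ci, cj])
                if best is None or d < best[0]:
                    best = (d, m)            # keep the closest mean
            out[i, j] = best[1]
    return out
```

On a perfect step edge, every pixel finds a side window lying entirely on its own side of the edge, so the edge passes through unchanged, which is precisely the edge-preserving property the text describes.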
A storage medium having stored thereon a computer program which, when executed by a processor, implements a full color sharpening method of multispectral and full color image fusion based on side window filtering as described above.
In particular, the storage medium stores a computer program that performs such a full color sharpening method. When the program is executed by the processor, the program can automatically process the input multispectral and full-color images, and a side window filtering technology is applied to realize efficient image fusion. The technology has wide application range, and is particularly suitable for remote sensing image processing requiring high spatial resolution and abundant spectrum information, such as fields of land coverage classification, environment monitoring, city planning and the like. Compared with the traditional full-color sharpening method, the method based on the side window filtering can more effectively keep the natural appearance and spectrum consistency of the image, and meanwhile improves the spatial resolution and detail performance of the image, so that a higher-quality image fusion result is provided.
A computer device, comprising: the system comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the program to realize the full-color sharpening method based on the multi-spectrum and full-color image fusion of side window filtering.
In particular, the design of the computer device integrates a memory, a processor and an application specific program, ensuring the high efficiency and stability of the full color sharpening process flow. The integrated design can reduce compatibility problems and improve the operation efficiency of the whole system. The processor is the core part of the execution of the computer program. When full-color sharpening is carried out, the processor can rapidly process a large amount of data, shortens the time of image processing, improves the processing speed, and is particularly suitable for processing large-scale image data sets. Because computer programs are specifically designed to perform full color sharpening methods, they can be highly customized to accommodate different types of multispectral and full-color image data. This means that it can handle images of many different sources and formats, with great versatility and adaptability. The automation degree of the computer program is high, and a large amount of data can be continuously and unattended processed. Such automation not only reduces the likelihood of human error, but also frees human resources so that operators can focus on more important tasks. Since the core program is stored in the memory, the entire system can be easily upgraded by updating the program in the memory. Such a design makes equipment maintenance and upgrades simple, which is beneficial for continuously improving and optimizing algorithms as technology advances. Such computer devices may be widely used in a variety of fields such as remote sensing image processing, weather observation, geographic Information Systems (GIS), environmental monitoring, etc., which generally require high quality image fusion techniques to provide accurate data analysis. In the long term, automated and efficient image processing can significantly reduce human and time costs, especially when processing large amounts of image data.
The following are specific examples:
referring to fig. 1-6, a full-color sharpening method for fusion of multispectral and full-color images based on side window filtering, comprising the steps of:
the method of component replacement class relies on projecting the MS image into a new space where the spatial structure is well separated from the spectral information. The transformed MS image is then sharpened by replacing the components containing the spatial structure with the PAN image. Finally, the sharpening of the MS image is completed by inversely transforming the data, so that the MS image is restored to the original space. The substitution step typically introduces spectral distortion due to low spatial frequency variations of the MS image. To alleviate this problem, a spectral matching process is typically performed before the spatial structure of the MS image is replaced with the PAN image. Notably, the cs-based approach achieves high fidelity in the rendering details of the fusion product. Furthermore, these methods are generally easy to implement and have a low computational burden, and thus can be used in cases where large amounts of data have to be fused.
Under the assumption of linear transformation and single component replacement, the component replacement fusion process can be simplified strongly without explicit calculation of forward and backward transformations. This results in a faster implementation, for eachThis can be generally described by the following equation:
(1)
wherein the subscript k represents the kth spectral image;is an injection gain matrix, typically stacked in a multi-dimensional array G; matrix multiplication is point-by-point. Finally, let(s)>Is a so-called intensity component, defined as
(2)
Wherein the weight vectorIs the first row of the forward transform matrix.
The GS orthogonalization process has been used for full color sharpening, known as GS spectral sharpening. The process uses intensity componentsAs the first vector of the new orthogonal basis. Orthogonalization processes the MS vector one at a time, finding its projection onto the (super) plane defined by the previously found orthogonal vector and its orthogonal components, such that the sum of the orthogonal and projected components equals the zero-mean version of the original vectorized band. By replacing +.>To complete full color sharpening.
Thus, the fusion process is described by (1), in whichThe injection gain is:
$G_k = \mathbf{1} \cdot \dfrac{\operatorname{cov}\left(\widetilde{\mathrm{MS}}_k, I_L\right)}{\operatorname{var}(I_L)}$ (3)
wherein $\mathbf{1}$ is an all-ones matrix; $\operatorname{cov}(X, Y)$ denotes the covariance between two images $X$ and $Y$; and $\operatorname{var}(X)$ is the variance of $X$.
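The simplified fusion of Eqs. (1)–(3) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented implementation: the function name, the equal default band weights, the mean/standard-deviation form of histogram matching, and the array shapes are all assumptions for demonstration.

```python
import numpy as np

def gs_fuse(ms, pan, w=None):
    """Simplified GS component-substitution fusion following Eqs. (1)-(3).

    ms  : (N, H, W) MS bands already interpolated to the PAN grid.
    pan : (H, W) panchromatic image.
    w   : per-band weights for the intensity component (default: equal,
          as in the original GS method).
    """
    n_bands = ms.shape[0]
    if w is None:
        w = np.full(n_bands, 1.0 / n_bands)

    # Eq. (2): intensity component I_L = sum_i w_i * MS_i
    i_l = np.tensordot(w, ms, axes=1)

    # Histogram-match (mean/std) the PAN image to I_L before substitution.
    p = (pan - pan.mean()) * i_l.std() / pan.std() + i_l.mean()

    # Eq. (3): injection gains g_k = cov(MS_k, I_L) / var(I_L)
    g = np.array([np.mean((b - b.mean()) * (i_l - i_l.mean())) for b in ms])
    g /= i_l.var()

    # Eq. (1): fused_k = MS_k + g_k * (P - I_L)
    return ms + g[:, None, None] * (p - i_l)
```

A convenient sanity check: when the PAN image equals $I_L$ itself, the detail term $(P - I_L)$ vanishes and the MS bands pass through unchanged.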
Common improvements of GS are obtained by changing the way $I_L$ is generated, i.e., by changing the weight vector. The original GS method averages the MS bands; the adaptive variant GSA instead estimates the weights by minimizing the MSE between the weighted average of the MS bands and a low-pass-filtered version of the PAN image. The context-adaptive GSA (CGSA) is obtained by applying GSA separately to each cluster produced by k-means clustering of the MS image.
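The GSA-style weight estimate — a least-squares regression of the filtered PAN image onto the MS bands, which minimizes the MSE described above — can be sketched as follows. The function name and the inclusion of a bias term are assumptions for illustration; the same routine applies whether the PAN image is degraded by a low-pass filter (GSA) or by SWF (the invention).

```python
import numpy as np

def gsa_weights(ms, pan_lp):
    """GSA-style weight estimation: least-squares fit of a (low-pass or
    SWF-filtered) PAN image as a weighted sum of the MS bands.

    ms     : (N, H, W) MS bands at the same resolution as pan_lp.
    pan_lp : (H, W) filtered/degraded PAN image.
    Returns (per-band weights, bias term).
    """
    a = ms.reshape(ms.shape[0], -1).T              # pixels x bands
    a = np.column_stack([a, np.ones(a.shape[0])])  # append a bias column
    coef, *_ = np.linalg.lstsq(a, pan_lp.ravel(), rcond=None)
    return coef[:-1], coef[-1]
```

When the filtered PAN image really is a linear combination of the MS bands, the regression recovers the combination exactly, which makes the routine easy to unit-test.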
When used as an image filter, a low-pass filter typically removes edge-detail information. The side-window filtering (SWF) model proposed in the prior art markedly improves edge preservation. Inspired by this model, the invention applies SWF to the pansharpening (PS) process, so that effective and appropriate detail features can be better extracted from the image, laying a foundation for improving the spatial resolution of the fused MS image while maintaining its spectral information after detail injection. Meanwhile, since the fused image must reflect not only the MS image but also the PAN image, the detail information is further increased.
In this way, the invention replaces the low-pass filtering used in computing the weight function with SWF filtering, so that effective and appropriate detail features are better preserved; the filtered image is then used in the weight function, enhancing the detail information of the intensity component. Through this strategy, the detail content of the fused image is increased. The new intensity-component and injection-gain strategies are described in detail below.
The new intensity-component strategy replaces the LP (low-pass) filtering of the original MS image with SWF filtering, so that more MS image detail is retained.
In the weight function of the invention, the weights are obtained by minimizing the MSE between the weighted average of the MS bands and the SWF-filtered version of the PAN image. SWF not only preserves detail information but also reduces the computational cost, thereby remedying the drawback of low-pass filtering, which filters out detail.
SWF describes the edges of targets well; Gaofen satellite data were used for the filter comparison. As can be seen from Fig. 1, the edges of the original image are very distinct, indicating that its edge descriptions are close to those of the PAN image, so the edge details of the PAN image are well reflected. The LP image of Fig. 2, however, is severely blurred, has low similarity to the PAN image, and displays less edge information. From the SWF-filtered image and the original PAN image in Fig. 3, it can be seen that the two images are very similar along the global edges, and even at relatively inconspicuous places the edges and details are shown well. A comparison of Figs. 2 and 3 shows that the SWF-filtered image in Fig. 3 extracts edge information significantly better than Fig. 2.
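The edge-preserving behavior discussed above can be reproduced with a minimal side-window box filter: each pixel is replaced by the mean of whichever of the eight side windows (L, R, U, D, NW, NE, SW, SE) best matches the pixel itself. This is a brute-force illustrative sketch only — the radius parameter, the replicate-edge padding, and the use of an absolute difference as the selection criterion are assumptions, not the patented implementation.

```python
import numpy as np

def side_window_box_filter(img, r=1):
    """Side-window box filter sketch: for each pixel, average over the
    side window (of 8 candidates) whose mean is closest to the pixel."""
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    k = 2 * r + 1
    # (row slice, col slice) of each side window inside the k x k patch.
    windows = [
        (slice(0, k),     slice(0, r + 1)),  # L
        (slice(0, k),     slice(r, k)),      # R
        (slice(0, r + 1), slice(0, k)),      # U
        (slice(r, k),     slice(0, k)),      # D
        (slice(0, r + 1), slice(0, r + 1)),  # NW
        (slice(0, r + 1), slice(r, k)),      # NE
        (slice(r, k),     slice(0, r + 1)),  # SW
        (slice(r, k),     slice(r, k)),      # SE
    ]
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + k, j:j + k]
            means = [patch[rs, cs].mean() for rs, cs in windows]
            c = img[i, j]
            out[i, j] = min(means, key=lambda m: abs(m - c))
    return out
```

Unlike a full box filter, this sketch leaves an ideal step edge untouched: on either side of the edge some side window lies entirely within one region, so its mean matches the center pixel exactly.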
To verify the effectiveness of the algorithm, its visual effect and objective indicators were evaluated. All experiments herein were performed on an Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz with 16 GB of memory under the Windows 10 (64-bit) operating system; the experimental platform was MATLAB R2021a. Six different quality indicators were selected, and the algorithm herein was compared experimentally with 11 different fusion algorithms on the IKONOS image dataset.
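Two of the quality indicators named in this work, SAM and ERGAS, follow standard definitions and can be sketched as follows. These are illustrative implementations of the common formulas, not necessarily the exact variants used in Table 1; the epsilon guard and the default resolution ratio of 4 (matching the IKONOS data) are assumptions.

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    """Spectral Angle Mapper (degrees): mean angle between the spectral
    vectors of reference and fused images, each shaped (N, H, W)."""
    num = np.sum(ref * fused, axis=0)
    den = np.linalg.norm(ref, axis=0) * np.linalg.norm(fused, axis=0) + eps
    return np.degrees(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def ergas(ref, fused, ratio=4):
    """ERGAS: 100/ratio * sqrt(mean over bands of (RMSE_k / mean_k)^2),
    where ratio is the PAN/MS resolution ratio (4 for IKONOS)."""
    rmse = np.sqrt(np.mean((ref - fused) ** 2, axis=(1, 2)))
    return 100.0 / ratio * np.sqrt(np.mean((rmse / ref.mean(axis=(1, 2))) ** 2))
```

Both indices are zero (up to numerical precision) for a perfect fusion and grow as the fused image departs from the reference, so lower is better for each.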
To further verify the effectiveness of the model, the algorithm herein and the 11 fusion algorithms were compared experimentally on the same dataset under the same processing platform. The characteristics of the dataset used are detailed below.
IKONOS dataset. As shown in Figs. 4 and 5, the dataset was acquired by the IKONOS sensor operating in the visible and near-infrared (VNIR) spectrum. The multispectral sensor provides four bands (blue, green, red, near-infrared), and a PAN channel is also available. The ground sample interval (GSI) of the MS bands is 4 m × 4 m and that of the PAN channel is 1 m × 1 m; the resolution ratio R is therefore equal to 4. The radiometric resolution is 11 bits. The original MS image size is 256 × 256 × 4 pixels, and the original PAN image size is 1024 × 1024 pixels.
Table 1 shows the comparison results on IKONOS satellite data. As can be seen from Table 1, the SWGSA method is greatly improved over the related GSA method and outperforms it on all index parameters. Compared with the other methods, SWGSA is second only to the SFIM method on the D index and on the SAM index; on the Ds, RQNR, Q, and ERGAS indices, SWGSA is superior to all 11 algorithms: EXP, HIS, PCA, MF, SFIM, PWMBF, TV, PNN, PNN-IDX, GSA, and C-GSA. Compared with the related GSA method, SWGSA reduces the D index by 45.45%, improves the RQNR index by 1.41%, reduces the SAM index by 15.85%, and reduces the ERGAS index by 25.88%.
TABLE 1
While the invention has been described in terms of preferred embodiments, it is not limited thereto; any person skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention, which is therefore to be limited only by the appended claims.

Claims (6)

1. The full-color sharpening method based on the fusion of the multispectral and the full-color image of the side window filter is characterized by comprising the following steps of:
s100, projecting an MS image into a new space, and simulating and constructing a Pan image with low spatial resolution by calculating fusion weighted values of multispectral images with different weights;
s200, performing GS transformation on the Pan image with the low spatial resolution of the analog structure, and performing GS transformation on a plurality of resampled MS images with the low spatial resolution of the spectrum band under the same scale, wherein the Pan image with the low spatial resolution of the analog structure is used as a first band in the GS transformation for transformation;
s300, adjusting statistics of the Pan image with high spatial resolution to enable the statistics of the Pan image with high spatial resolution to be matched with statistics of a first conversion band group obtained by GS conversion of the Pan image with low spatial resolution of a simulation structure, so that an improved Pan image with high spatial resolution is obtained;
s400, replacing the 1 st conversion wave band obtained by GS conversion with the improved high-spatial resolution Pan image to generate a new conversion wave band group;
s500, carrying out GS inverse transformation on the new transformation waveband group to obtain an enhanced spatial resolution MS image;
s600, introducing an SWF model, replacing an original low-pass filtering mode in the weight function with the SWF filtering mode, and putting the filtered image into the weight function to obtain better edge details;
in S600, the following steps are included:
s610, firstly determining the radius of a square window;
s620, calculating 8 side window separation kernels according to the radius, wherein the 8 side window separation kernels are respectively left L, right R, upper U, lower D, northwest NW, northeast NE, southwest SW and southeast SE side windows;
s630, calculating projection distances of all side windows according to calculation results of 8 side window separation kernels;
s640, obtaining the minimum signed distance of 8 side windows, and taking the minimum signed distance as the output of the current window;
and S650, finally, taking the sliding window as a unit, sequentially moving the window to perform side window processing on the MS image, applying a side window filter to perform weighted average on the MS wave band, and comparing the results.
2. The full-color sharpening method based on side window filtering multispectral and full-color image fusion according to claim 1, comprising the following steps in S100:
s110, applying a transformation algorithm to the multispectral image so as to generate a series of new images, wherein each image represents a specific feature in the original multispectral image;
s120, analyzing each image component obtained through transformation, and identifying a component containing a main spatial structure and a component containing main spectral information;
s130, separating components of the spatial structure and components of the spectrum information.
3. The full-color sharpening method of multi-spectral and full-color image fusion based on side window filtering of claim 2, wherein in S110, the transformation algorithm is PCA or IHS.
4. The full-color sharpening method based on side window filtering multi-spectral and full-color image fusion according to claim 1, wherein in S300, S400 and S500,
the component-replacement fusion process is greatly simplified, without explicit computation of the forward and backward transforms; for each spectral band k it is described by the following equation:
$\widehat{\mathrm{MS}}_k = \widetilde{\mathrm{MS}}_k + G_k \cdot (P - I_L)$ (1)
wherein $\widetilde{\mathrm{MS}}_k$ represents the multispectral image interpolated to the full-color (panchromatic) image scale, and the subscript k represents the k-th spectral band; $G_k$ is an injection-gain matrix, stacked in a multidimensional array $G$; $P$ represents the PAN image; the matrix multiplication is point-wise; finally, $I_L$ is the so-called intensity component, defined as
$I_L = \sum_{i=1}^{N} w_i \, \widetilde{\mathrm{MS}}_i$ (2)
wherein the weight vector $\mathbf{w} = [w_1, \ldots, w_N]$ is the first row of the forward transformation matrix,
the GS orthogonalization process uses the intensity component $I_L$ as the first vector of the new orthogonal basis; orthogonalization processes one MS vector at a time, finding the projection of the MS vector onto the plane or hyperplane defined by the previously found orthogonal vectors, together with its orthogonal component, such that the sum of the orthogonal component and the projected component equals the zero-mean version of the original vectorized band; before performing the inverse transform, $I_L$ is replaced with the histogram-matched $P$ to complete full-color sharpening; the fusion process is therefore described by (1), wherein the injection gains are:
$G_k = \mathbf{1} \cdot \dfrac{\operatorname{cov}\left(\widetilde{\mathrm{MS}}_k, I_L\right)}{\operatorname{var}(I_L)}$ (3)
wherein $\mathbf{1}$ is an all-ones matrix; $\operatorname{cov}(X, Y)$ denotes the covariance between two images $X$ and $Y$; and $\operatorname{var}(X)$ is the variance of $X$.
5. A storage medium having stored thereon a computer program, which when executed by a processor, implements a full color sharpening method of side window filtering based multi-spectral and full color image fusion according to any one of claims 1-4.
6. A computer device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the full color sharpening method of side window filtering based multi-spectral and full color image fusion of any one of claims 1-4.
CN202311639653.XA 2023-12-04 2023-12-04 Multispectral and panchromatic image fusion panchromatic sharpening method based on side window filtering Active CN117372276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311639653.XA CN117372276B (en) 2023-12-04 2023-12-04 Multispectral and panchromatic image fusion panchromatic sharpening method based on side window filtering


Publications (2)

Publication Number Publication Date
CN117372276A CN117372276A (en) 2024-01-09
CN117372276B true CN117372276B (en) 2024-03-08

Family

ID=89398727


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785253A (en) * 2018-12-25 2019-05-21 西安交通大学 A kind of panchromatic sharpening post-processing approach based on enhancing back projection
CN109993717A (en) * 2018-11-14 2019-07-09 重庆邮电大学 A kind of remote sensing image fusion method of combination guiding filtering and IHS transformation
CN112802074A (en) * 2021-01-06 2021-05-14 江南大学 Textile flaw detection method based on illumination correction and visual saliency characteristics
CN115330653A (en) * 2022-08-16 2022-11-11 西安电子科技大学 Multi-source image fusion method based on side window filtering


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Improving Component Substitution Pansharpening Through Multivariate Regression of MS +Pan Data";Bruno Aiazzi等;《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》;第45卷(第10期);第3230-3239页 *
"Research Progress of Multiband Remote Sensing Image Sharpening Methods"; Tao Jingzhe et al.; Spectroscopy and Spectral Analysis; Vol. 43, No. 10; pp. 2999-3008 *
"Research on Endmember Abundance Information Extraction by Least-Squares Mixed-Pixel Decomposition"; Yang Chao et al.; Science of Surveying and Mapping; Vol. 42, No. 9; pp. 143-150 *
"Research on Infrared and Visible Image Registration and Fusion Algorithms and Development Based on FPGA"; Xiao Rui; China Master's Theses Full-text Database, Information Science and Technology; No. 08; pp. 33-39 *
"Research on Remote Sensing Image Information Fusion"; Wang Tingting; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; No. 03; pp. 12-14 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant