CN113112591A - Multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition - Google Patents
- Publication number: CN113112591A
- Application number: CN202110403626.7A
- Authority: CN (China)
- Prior art keywords: image, time, tensor, mode, dictionary
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/25—Fusion techniques
Abstract
The invention relates to a multi-temporal remote sensing image space-time-spectrum integrated fusion method based on coupled sparse tensor decomposition, which comprises the following steps: preprocessing the hyperspectral HtSe-Lsa imagery with high temporal resolution, high spectral resolution and low spatial resolution; and expressing the multi-temporal HtSe-Lsa imagery and the Hsa-LtSe imagery in three-dimensional tensor form. The beneficial effects of the invention are: the third dimension is expanded with multiple long-time-series images, so that a single fusion pass can generate multiple high-resolution images simultaneously, realizing the integrated fusion of images of different time phases; exploiting the multi-dimensional expressive power of tensors, the target images of different time phases are stacked along the spectral dimension to extend the time phases; and the dimensionality reduction improves computational efficiency on the one hand and, on the other hand, reduces the error accumulation that arises when an excessive dimensionality leaves too many variables to be solved, greatly lowering the complexity of the reconstruction model and improving reconstruction efficiency.
Description
Technical Field
The invention relates to the field of remote sensing image processing, in particular to a multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition.
Background
With the rapid development of remote sensing technology, satellite sensors acquire a large number of remote sensing images of different temporal, spatial and spectral resolutions every day. Time-series remote sensing imagery with both high spatial and high spectral resolution offers great potential for monitoring and studying rapid surface change. However, owing to the limitations of satellite imaging systems, the acquired images trade temporal, spatial and spectral resolution off against one another, and no satellite sensor in the world can simultaneously acquire remote sensing images with high spatial, high spectral and high temporal resolution.
Remote sensing image fusion integrates multi-source remote sensing data that are complementary in time, space and spectrum according to a certain rule (or algorithm), so as to obtain more accurate and richer information than any single data source. Existing remote sensing image fusion methods can be divided, according to purpose, into multi-view spatial fusion, spatial-spectral fusion, temporal-spatial fusion, and so on. Multi-view spatial fusion generates images of higher spatial resolution by processing multi-view (multi-temporal, multi-angle) remote sensing images with sub-pixel offsets. Spatial-spectral fusion exploits the spatial-spectral complementarity among multi-source image data to generate remote sensing images with both high spatial and high spectral resolution; it mainly comprises panchromatic/multispectral image fusion and panchromatic (or multispectral)/hyperspectral image fusion. Temporal-spatial fusion exploits the temporal-spatial complementarity among multi-source image data to generate temporally continuous remote sensing images of high spatial resolution; it mainly comprises fusion methods assisted by a single-phase image pair and fusion methods assisted by multi-phase image pairs.
On the one hand, most existing fusion methods integrate only two of the temporal, spatial and spectral relations, and cannot simultaneously obtain fused images with high temporal, high spatial and high spectral resolution; on the other hand, these methods can generally only generate a single-phase high-resolution fused image, and cannot fuse multiple densely-timed images of low spatial (or spectral) resolution with sparsely-timed images of high spatial (or spectral) resolution to generate multiple high-resolution images at once.
Remote sensing imagery with high temporal, high spatial and high spectral resolution is of great significance for applications in many fields, such as fine monitoring of natural resources and precision agriculture. However, constrained by the sensor imaging system, no single sensor can acquire imagery with high temporal, spatial and spectral resolution. Time-space-spectrum fusion can integrate the complementary advantages of multi-source images to generate images of high time-space-spectrum resolution; however, existing time-space-spectrum fusion methods output only a high-resolution image at one specific moment per run, and insufficiently consider the time-series relations among the images.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition.
The space-time spectrum integrated fusion method of the multi-temporal remote sensing image based on the coupled sparse tensor decomposition comprises the following steps of:
step 1, preprocessing a hyperspectral HtSe-Lsa image with high temporal resolution, high spectral resolution and low spatial resolution, and preprocessing an Hsa-LtSe image with high spatial resolution, low temporal resolution and low spectral resolution;
step 2, expressing the multi-temporal HtSe-Lsa imagery and the Hsa-LtSe imagery in three-dimensional tensor form by exploiting the multi-dimensional expressive power of tensors: the image bands of different time phases are treated as bands carrying temporal-change information, and the remote sensing images of the same sensor at different time phases are stacked and expanded along the spectral dimension to serve as the third dimension;
step 3, constructing a time-space-spectrum tensor relation model of coupling between the target image and the input HtSe-Lsa image and Hsa-LtSe image;
step 4, initializing and solving dictionaries with different dimensions and structure tensors: establishing a time-space-spectrum integrated fusion model based on a variation framework;
and 5, optimizing the dictionaries with different dimensions and the structure tensor to obtain and reconstruct a target image.
Preferably, step 3 specifically comprises the following steps:
step 3.1, using a Tucker decomposition model, describe the multi-temporal target image $\mathcal{X}$ as a three-dimensional tensor, represented as:

$\mathcal{X} = \mathcal{C} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 \mathbf{ST}$ (1)

in the above formula, $\times_n$ denotes the mode-n product of a tensor and a matrix; $\mathcal{X}$ is the target image; $\mathcal{C} \in \mathbb{R}^{n_h \times n_w \times n_{st}}$ is the structure tensor of the target image $\mathcal{X}$, where $n_h$ is the number of atoms of the high-mode dictionary, $n_w$ the number of atoms of the wide-mode dictionary, and $n_{st}$ the number of atoms of the time-spectrum-mode dictionary; $\mathbf{H}$ denotes the high-mode dictionary with $n_h$ atoms, $\mathbf{W}$ the wide-mode dictionary with $n_w$ atoms, and $\mathbf{ST}$ the time-spectrum-mode dictionary with $n_{st}$ atoms; the high mode and the wide mode represent the spatial dimension information of the target image $\mathcal{X}$, the time-spectrum mode represents its temporal and spectral dimension information, and the elements of $\mathcal{C}$ are the relation coefficients among the three dictionaries in the time-space-spectrum tensor relation model;
step 3.2, take the input HtSe-Lsa image $\mathcal{Y}_h$ with high temporal resolution, high spectral resolution and low spatial resolution as the spatially degraded image of the target image $\mathcal{X}$; the degraded image is obtained from $\mathcal{X}$ by applying the point spread function and the down-sampling matrix along the high and wide modes:

$\mathcal{Y}_h = \mathcal{X} \times_1 P_2 \times_2 P_1$ (2)

in the above formula, $\times_n$ denotes the mode-n product of a tensor and a matrix; $\mathcal{X}$ is the target image; $P_1 \in \mathbb{R}^{w \times W}$ and $P_2 \in \mathbb{R}^{h \times H}$ are the spatial down-sampling operators along the wide and high modes respectively; the spatially degraded image $\mathcal{Y}_h$ of the target image $\mathcal{X}$ represents the degradation in spatial resolution between the fused image and the observed image;
step 3.3, assuming the degradation acts separably on the wide and high modes, combine formula (1) and formula (2) to write the image $\mathcal{Y}_h$ as:

$\mathcal{Y}_h = \mathcal{C} \times_1 (P_2\mathbf{H}) \times_2 (P_1\mathbf{W}) \times_3 \mathbf{ST} = \mathcal{C} \times_1 \mathbf{H}^{*} \times_2 \mathbf{W}^{*} \times_3 \mathbf{ST}$ (3)

in the above formula, $\mathbf{W}^{*} = P_1\mathbf{W}$ and $\mathbf{H}^{*} = P_2\mathbf{H}$ are the down-sampled wide-mode and high-mode dictionaries respectively, and $\times_n$ denotes the mode-n product of a tensor and a matrix;
step 3.4, take the input Hsa-LtSe image $\mathcal{Y}_m$ with high spatial resolution, low temporal resolution and low spectral resolution as the degraded image of the target image $\mathcal{X}$ in the time and spectral dimensions; the degraded image is obtained from $\mathcal{X}$ by applying the down-sampling operator along the time and spectral dimensions:

$\mathcal{Y}_m = \mathcal{X} \times_3 P_{st}$ (4)

in the above formula, $\times_n$ denotes the mode-n product of a tensor and a matrix, and the time-spectrum down-sampling operator $P_{st}$ is formed from $P_3 \in \mathbb{R}^{s \times S}$ and $P_4 \in \mathbb{R}^{t \times T}$, the down-sampling operators in the spectral and temporal dimensions respectively; it represents the degradation of the fused image into the observed image in the time-spectrum dimension; combining formula (1) and formula (4), the Hsa-LtSe image $\mathcal{Y}_m$ is written as:

$\mathcal{Y}_m = \mathcal{C} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 (P_{st}\mathbf{ST}) = \mathcal{C} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 \mathbf{ST}^{*}$ (5)

in the above formula, $\mathbf{ST}^{*} = P_{st}\mathbf{ST}$ denotes the down-sampled dictionary in the time-spectrum dimension.
Preferably, step 4 specifically comprises: treating the solution of the fused image as an ill-posed inverse problem, introducing a sparse prior on the structure tensor within a variational framework, and establishing the time-space-spectrum integrated fusion model:

$\min_{\mathbf{W},\mathbf{H},\mathbf{ST},\mathcal{C}}\ \|\mathcal{Y}_h-\mathcal{C}\times_1\mathbf{H}^{*}\times_2\mathbf{W}^{*}\times_3\mathbf{ST}\|_F^2+\|\mathcal{Y}_m-\mathcal{C}\times_1\mathbf{H}\times_2\mathbf{W}\times_3\mathbf{ST}^{*}\|_F^2\quad\text{s.t.}\ \|\mathcal{C}\|_0\le N$ (6)

in the above formula, $\|\cdot\|_0$ and $\|\cdot\|_F$ denote the $\ell_0$ norm and the F norm respectively, and $N$ is the maximum number of non-zero elements in the structure tensor $\mathcal{C}$; $\mathbf{H}$, $\mathbf{W}$ and $\mathbf{ST}$ denote the high-mode, wide-mode and time-spectrum-mode dictionaries with $n_h$, $n_w$ and $n_{st}$ atoms respectively; $\mathcal{Y}_h$ is the spatially degraded image of the target image $\mathcal{X}$; $\mathbf{W}^{*}$ and $\mathbf{H}^{*}$ are the down-sampled wide-mode and high-mode dictionaries, and $\mathbf{ST}^{*}$ is the down-sampled dictionary in the time-spectrum dimension; since the $\ell_0$ constraint is non-convex, it is relaxed into an $\ell_1$ regularization term:

$\min_{\mathbf{W},\mathbf{H},\mathbf{ST},\mathcal{C}}\ \|\mathcal{Y}_h-\mathcal{C}\times_1\mathbf{H}^{*}\times_2\mathbf{W}^{*}\times_3\mathbf{ST}\|_F^2+\|\mathcal{Y}_m-\mathcal{C}\times_1\mathbf{H}\times_2\mathbf{W}\times_3\mathbf{ST}^{*}\|_F^2+\lambda\|\mathcal{C}\|_1$ (7)

in the above formula, $\lambda$ is the sparse regularization parameter and $\|\cdot\|_1$ is the $\ell_1$ norm.
Preferably, step 5 specifically comprises the following steps:
step 5.1, initialize the wide-mode and high-mode dictionaries of the image with the dictionary-update-cycle K-SVD (DUC-KSVD) algorithm, initialize the time-spectrum-mode dictionary with the simplex identification via split augmented Lagrangian (SISAL) algorithm, and initialize the structure tensor $\mathcal{C}$ with the alternating direction method of multipliers (ADMM); then solve for $\mathcal{C}$, $\mathbf{W}$, $\mathbf{H}$ and $\mathbf{ST}$ with the proximal alternating optimization (PAO) algorithm, which guarantees that $\mathcal{C}$, $\mathbf{W}$, $\mathbf{H}$ and $\mathbf{ST}$ converge and reach a critical point; the iterative updates of $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ and $\mathcal{C}$ are as follows:

$\mathbf{W} \leftarrow \arg\min_{\mathbf{W}} f(\mathbf{W},\,\mathrm{pre}\mathbf{H},\,\mathrm{pre}\mathbf{ST},\,\mathrm{pre}\mathcal{C}) + \beta\|\mathbf{W}-\mathrm{pre}\mathbf{W}\|_F^2$
$\mathbf{H} \leftarrow \arg\min_{\mathbf{H}} f(\mathbf{W},\,\mathbf{H},\,\mathrm{pre}\mathbf{ST},\,\mathrm{pre}\mathcal{C}) + \beta\|\mathbf{H}-\mathrm{pre}\mathbf{H}\|_F^2$
$\mathbf{ST} \leftarrow \arg\min_{\mathbf{ST}} f(\mathbf{W},\,\mathbf{H},\,\mathbf{ST},\,\mathrm{pre}\mathcal{C}) + \beta\|\mathbf{ST}-\mathrm{pre}\mathbf{ST}\|_F^2$
$\mathcal{C} \leftarrow \arg\min_{\mathcal{C}} f(\mathbf{W},\,\mathbf{H},\,\mathbf{ST},\,\mathcal{C}) + \beta\|\mathcal{C}-\mathrm{pre}\mathcal{C}\|_F^2$ (8)

in the above formulas, $f$ is given by the objective function (7), $\beta > 0$ is the proximal weight coefficient, and parameters prefixed with pre denote the values solved in the previous iteration; $\|\cdot\|_F$ denotes the F norm;
step 5.2, optimize the dictionaries and the structure tensor: update and optimize the dictionaries $\mathbf{W}$, $\mathbf{H}$ and $\mathbf{ST}$ with the conjugate gradient algorithm, and update and solve the structure tensor $\mathcal{C}$ with the alternating direction method of multipliers; with the solved optimal dictionaries $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ and structure tensor $\mathcal{C}$, estimate the target image $\mathcal{X}$ using formula (1).
Preferably, the preprocessing in step 1 comprises: radiometric calibration, atmospheric correction, orthorectification, and image registration.
Preferably, in step 2, the data of time phases missing from the Hsa-LtSe imagery are filled with 0.
The invention has the beneficial effects that:
(1) By stacking the time dimension onto the spectrum, the invention expands the third dimension with multiple long-time-series images, so that a single fusion pass can generate multiple high-resolution images simultaneously, realizing the integrated fusion of multiple images of different time phases.
(2) Exploiting the multi-dimensional expressive power of tensors, the invention stacks target images of different time phases along the spectral dimension to extend the time phases, expresses them in three-dimensional tensor form, and converts the four-dimensional time-space-spectrum fusion problem over multiple time phases into a three-dimensional fusion problem. On the one hand, the dimensionality reduction improves computational efficiency; on the other hand, it reduces the error accumulation caused by the excessive number of variables to be solved when the dimensionality is too high, greatly lowering the complexity of the reconstruction model and improving reconstruction efficiency.
(3) The invention establishes an integrated coupling relation model for multi-temporal, multi-resolution sensor imagery. Each observed image is regarded as a degraded image of the fusion target image in the time, spectral or spatial dimension, and different degradation functions can be set according to the sensor characteristics; the method is therefore applicable to the fusion of different types of sensor images with different resolutions, such as time-space fusion, space-spectrum fusion and time-space-spectrum fusion, and has strong generality.
(4) The invention introduces a sparse prior on the target image, i.e., the structure tensor is sparse; by exploiting the high autocorrelation in image space and the high redundancy across spectra and adjacent time phases, a high-quality time-space-spectrum fused image can be obtained.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the following examples. The following examples are set forth merely to aid in the understanding of the invention. It should be noted that, for a person skilled in the art, several modifications and improvements can be made to the invention without departing from its principle, and these modifications and improvements also fall within the protection scope of the claims of the present invention.
The invention exploits the multi-dimensional expressive power of tensors to establish the temporal, spatial and spectral relations among remote sensing images of different time-space-spectrum resolutions. At the same time, it fully considers the autocorrelation of the high time-space-spectrum-resolution image: high self-similarity in space, low dimensionality in spectrum, and high redundancy between adjacent time phases. A sparse prior is introduced on the target image as a regularization term to establish the optimal fusion model, and the time-series fused images are output in an integrated manner by iterative optimization, yielding high time-space-spectrum-resolution fused images with high fidelity of space-time-spectrum information and realizing the space-time-spectrum integrated fusion of long time-series remote sensing imagery. The invention can be run automatically by computer software.
As an embodiment, a flowchart is shown in fig. 1, and specifically includes the following steps:
step 1, preprocessing a hyperspectral HtSe-Lsa image with high temporal resolution, high spectral resolution and low spatial resolution, and preprocessing an Hsa-LtSe image with high spatial resolution, low temporal resolution and low spectral resolution; the pre-processing includes radiometric calibration, atmospheric correction, orthometric correction, and image registration.
Step 2, express the multi-temporal HtSe-Lsa imagery and the Hsa-LtSe imagery in three-dimensional tensor form by exploiting the multi-dimensional expressive power of tensors. Remote sensing images of the same scene at different time phases generally have great redundancy, very similar to the low-dimensional property of a single image across different bands; the image bands of different phases are therefore regarded as bands carrying temporal-change information, the remote sensing images of the same sensor at different time phases are stacked and expanded along the spectral dimension as the third dimension, and the data of time phases missing from the Hsa-LtSe imagery are filled with 0;
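Step 2 amounts to concatenating the per-phase image cubes along the band axis, zero-filling missing phases. A minimal NumPy sketch (function names and array sizes are illustrative assumptions, not from the patent):

```python
import numpy as np

def stack_time_phases(phases, missing=()):
    """phases: list of (H, W, S) arrays, one per time phase.
    Phase indices in `missing` are zero-filled, as the patent does
    for time phases absent from the Hsa-LtSe imagery."""
    h, w, s = phases[0].shape
    cubes = []
    for t, p in enumerate(phases):
        if t in missing:
            cubes.append(np.zeros((h, w, s)))  # missing phase filled with 0
        else:
            cubes.append(p)
    # the stacked time-spectrum mode becomes the third tensor dimension
    return np.concatenate(cubes, axis=2)       # shape (H, W, S*T)

# Example: 3 phases of a 4x4 image with 2 bands -> a 4x4x6 tensor
phases = [np.random.rand(4, 4, 2) for _ in range(3)]
tensor = stack_time_phases(phases, missing={1})
assert tensor.shape == (4, 4, 6)
assert np.all(tensor[:, :, 2:4] == 0)  # phase 1 was zero-filled
```

This makes explicit why the time-spectrum mode can be treated as a single third dimension in the tensor model.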
Step 3, decompose the HtSe-Lsa and Hsa-LtSe data in tensor form into a structure tensor and three mode matrices (high mode, wide mode and time-spectrum mode) using Tucker decomposition. HtSe-Lsa is regarded as the spatially degraded image of the target image, obtained by applying the point spread function and the spatial down-sampling matrix; Hsa-LtSe is regarded as the degradation of the target image in spectrum and time, where the spectral degradation function is obtained from the sensor spectral response function and the temporal degradation function from the time-phase correlation. On this basis, construct the coupled time-space-spectrum tensor relation model between the target image and the input HtSe-Lsa and Hsa-LtSe imagery;
Step 3.1, using a Tucker decomposition model, describe the multi-temporal target image $\mathcal{X}$ as a three-dimensional tensor, represented as the structure tensor multiplied by the high-mode, wide-mode and time-spectrum-mode dictionaries:

$\mathcal{X} = \mathcal{C} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 \mathbf{ST}$ (1)

in the above formula, $\times_n$ denotes the mode-n product of a tensor and a matrix; $\mathcal{X}$ is the target image; $\mathcal{C} \in \mathbb{R}^{n_h \times n_w \times n_{st}}$ is the structure tensor of the target image $\mathcal{X}$, where $n_h$ is the number of atoms of the high-mode dictionary, $n_w$ the number of atoms of the wide-mode dictionary, and $n_{st}$ the number of atoms of the time-spectrum-mode dictionary; $\mathbf{H}$ denotes the high-mode dictionary with $n_h$ atoms, $\mathbf{W}$ the wide-mode dictionary with $n_w$ atoms, and $\mathbf{ST}$ the time-spectrum-mode dictionary with $n_{st}$ atoms; the high mode and the wide mode represent the spatial dimension information of the target image $\mathcal{X}$, the time-spectrum mode represents its temporal and spectral dimension information, and the elements of $\mathcal{C}$ are the relation coefficients among the three dictionaries in the time-space-spectrum tensor relation model;
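The Tucker decomposition of step 3.1 can be sketched in NumPy with a mode-n product helper (all sizes below are illustrative assumptions):

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    # X x_n M: contract the n-th axis of the tensor with the rows of M,
    # then move the new axis back into position n
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

def tucker_reconstruct(core, H, W, ST):
    # X = C x1 H x2 W x3 ST, as in formula (1)
    X = mode_n_product(core, H, 0)   # high (height) mode
    X = mode_n_product(X, W, 1)      # wide (width) mode
    X = mode_n_product(X, ST, 2)     # time-spectrum mode
    return X

nh, nw, nst = 5, 6, 7                       # numbers of dictionary atoms
C = np.random.rand(nh, nw, nst)             # structure (core) tensor
H = np.random.rand(16, nh)                  # high-mode dictionary
W = np.random.rand(20, nw)                  # wide-mode dictionary
ST = np.random.rand(12, nst)                # time-spectrum-mode dictionary
X = tucker_reconstruct(C, H, W, ST)
assert X.shape == (16, 20, 12)
```

The dictionary row counts give the full image size, while the atom counts give the (much smaller) core size, which is where the dimensionality reduction comes from.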
Step 3.2, take the input HtSe-Lsa image $\mathcal{Y}_h$ with high temporal resolution, high spectral resolution and low spatial resolution as the spatially degraded image of the target image $\mathcal{X}$; the degraded image is obtained from $\mathcal{X}$ by applying the point spread functions (PSFs) and the down-sampling matrix along the high and wide modes:

$\mathcal{Y}_h = \mathcal{X} \times_1 P_2 \times_2 P_1$ (2)

in the above formula, $\mathcal{X}$ is the target image; $P_1 \in \mathbb{R}^{w \times W}$ and $P_2 \in \mathbb{R}^{h \times H}$ are the spatial down-sampling operators along the wide and high modes respectively; the spatially degraded image $\mathcal{Y}_h$ of the target image $\mathcal{X}$ represents the degradation in spatial resolution between the fused image and the observed image;
Step 3.3, assuming the degradation acts separably on the wide and high modes, combine formula (1) and formula (2) to write the image $\mathcal{Y}_h$ as:

$\mathcal{Y}_h = \mathcal{C} \times_1 (P_2\mathbf{H}) \times_2 (P_1\mathbf{W}) \times_3 \mathbf{ST} = \mathcal{C} \times_1 \mathbf{H}^{*} \times_2 \mathbf{W}^{*} \times_3 \mathbf{ST}$ (3)

in the above formula, $\mathbf{W}^{*} = P_1\mathbf{W}$ and $\mathbf{H}^{*} = P_2\mathbf{H}$ are the down-sampled wide-mode and high-mode dictionaries respectively; for point spread functions (PSFs) the separability assumption is valid, which brings many computational benefits in the tensor calculations;
Step 3.4, take the input Hsa-LtSe image $\mathcal{Y}_m$ with high spatial resolution, low temporal resolution and low spectral resolution as the degraded image of the target image $\mathcal{X}$ in the time and spectral dimensions; the degraded image is obtained from $\mathcal{X}$ by applying the down-sampling operator along the time and spectral dimensions:

$\mathcal{Y}_m = \mathcal{X} \times_3 P_{st}$ (4)

in the above formula, the time-spectrum down-sampling operator $P_{st}$ is formed from $P_3 \in \mathbb{R}^{s \times S}$ and $P_4 \in \mathbb{R}^{t \times T}$, the down-sampling operators in the spectral and temporal dimensions respectively, and represents the degradation of the fused image into the observed image in the time-spectrum dimension; combining formula (1) and formula (4), the Hsa-LtSe image $\mathcal{Y}_m$ is written as:

$\mathcal{Y}_m = \mathcal{C} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 (P_{st}\mathbf{ST}) = \mathcal{C} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 \mathbf{ST}^{*}$ (5)

in the above formula, $\mathbf{ST}^{*} = P_{st}\mathbf{ST}$ denotes the down-sampled dictionary in the time-spectrum dimension. The relations in the spatial and time-spectrum dimensions between the target image $\mathcal{X}$ of high time-space-spectrum resolution and the observed images $\mathcal{Y}_h$ and $\mathcal{Y}_m$ are thus comprehensively expressed in tensor form, constructing the coupled time-space-spectrum tensor relation model; the target image $\mathcal{X}$ is reconstructed from the images $\mathcal{Y}_h$ and $\mathcal{Y}_m$ by solving, based on the coupling relation model, for the dictionaries ($\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$) and the corresponding structure tensor $\mathcal{C}$;
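The two degradation paths of steps 3.2 and 3.4 can be sketched as mode products with down-sampling operators. Here simple block-averaging operators stand in for the PSF/down-sampling matrices and the spectral/temporal response functions the patent derives from the sensors (all names and sizes are assumptions):

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    # X x_n M: mode-n product of a tensor and a matrix
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

def avg_downsample_operator(low, high):
    # (low x high) operator averaging consecutive blocks of size high//low;
    # a stand-in for the patent's PSF-derived down-sampling matrices
    r = high // low
    P = np.zeros((low, high))
    for i in range(low):
        P[i, i * r:(i + 1) * r] = 1.0 / r
    return P

X = np.random.rand(16, 20, 12)           # target image (high, wide, time-spectrum)
P2 = avg_downsample_operator(4, 16)      # high-mode spatial down-sampling
P1 = avg_downsample_operator(5, 20)      # wide-mode spatial down-sampling
Pst = avg_downsample_operator(3, 12)     # combined time-spectrum down-sampling

Y_h = mode_n_product(mode_n_product(X, P2, 0), P1, 1)  # eq. (2): HtSe-Lsa analogue
Y_m = mode_n_product(X, Pst, 2)                        # eq. (4): Hsa-LtSe analogue
assert Y_h.shape == (4, 5, 12) and Y_m.shape == (16, 20, 3)
```

This mirrors the coupling: both observations come from the same target tensor, degraded along different modes.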
Step 4, initialize and solve the dictionaries of different dimensions and the structure tensor: treating the solution of the fused image as an ill-posed inverse problem, introduce a sparse prior on the structure tensor within a variational framework and establish the time-space-spectrum integrated fusion model:

$\min_{\mathbf{W},\mathbf{H},\mathbf{ST},\mathcal{C}}\ \|\mathcal{Y}_h-\mathcal{C}\times_1\mathbf{H}^{*}\times_2\mathbf{W}^{*}\times_3\mathbf{ST}\|_F^2+\|\mathcal{Y}_m-\mathcal{C}\times_1\mathbf{H}\times_2\mathbf{W}\times_3\mathbf{ST}^{*}\|_F^2\quad\text{s.t.}\ \|\mathcal{C}\|_0\le N$ (6)

in the above formula, $\|\cdot\|_0$ and $\|\cdot\|_F$ denote the $\ell_0$ norm and the F norm respectively, and $N$ is the maximum number of non-zero elements in the structure tensor $\mathcal{C}$. Because the HtSe-Lsa image and the Hsa-LtSe image are down-sampled in the spatial and time-spectrum dimensions respectively, the fusion problem is an ill-posed inverse problem, and some prior information about the target image must be introduced to regularize it. Since the target image has strong sparsity in the spectral and temporal dimensions and self-similarity in space, it is considered sparse in both the spatial and time-spectrum dimensions, so the structure tensor $\mathcal{C}$ in the Tucker decomposition admits a sparse representation over all three dictionaries; the $\ell_0$ constraint is therefore relaxed into an $\ell_1$ regularization term:

$\min_{\mathbf{W},\mathbf{H},\mathbf{ST},\mathcal{C}}\ \|\mathcal{Y}_h-\mathcal{C}\times_1\mathbf{H}^{*}\times_2\mathbf{W}^{*}\times_3\mathbf{ST}\|_F^2+\|\mathcal{Y}_m-\mathcal{C}\times_1\mathbf{H}\times_2\mathbf{W}\times_3\mathbf{ST}^{*}\|_F^2+\lambda\|\mathcal{C}\|_1$ (7)

in the above formula, $\lambda$ is the sparse regularization parameter and $\|\cdot\|_1$ is the $\ell_1$ norm;
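The relaxed objective (7) is straightforward to evaluate numerically. A sketch (the down-sampled dictionaries here are simple row subsamples standing in for $P_2\mathbf{H}$, $P_1\mathbf{W}$, $P_{st}\mathbf{ST}$; all names and sizes are assumptions):

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

def reconstruct(C, H, W, ST):
    # C x1 H x2 W x3 ST
    X = mode_n_product(C, H, 0)
    X = mode_n_product(X, W, 1)
    return mode_n_product(X, ST, 2)

def fusion_objective(C, H, W, ST, Hs, Ws, STs, Y_h, Y_m, lam):
    # spatial data term for the HtSe-Lsa observation, eq. (3)
    r_h = Y_h - reconstruct(C, Hs, Ws, ST)
    # time-spectrum data term for the Hsa-LtSe observation, eq. (5)
    r_m = Y_m - reconstruct(C, H, W, STs)
    # l1 sparsity prior on the structure tensor
    return np.sum(r_h ** 2) + np.sum(r_m ** 2) + lam * np.sum(np.abs(C))

rng = np.random.default_rng(0)
nh, nw, nst = 3, 3, 4
H, W, ST = rng.random((8, nh)), rng.random((8, nw)), rng.random((6, nst))
Hs, Ws, STs = H[::2], W[::2], ST[::2]     # stand-ins for P2·H, P1·W, Pst·ST
Y_h, Y_m = rng.random((4, 4, 6)), rng.random((8, 8, 3))
C0 = np.zeros((nh, nw, nst))
val = fusion_objective(C0, H, W, ST, Hs, Ws, STs, Y_h, Y_m, lam=0.1)
assert np.isclose(val, np.sum(Y_h ** 2) + np.sum(Y_m ** 2))
```

With a zero core both data terms reduce to the observation energies, which makes the check at the end exact.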
Step 5, optimize the dictionaries of different dimensions and the structure tensor to obtain and reconstruct the target image, using the solved optimal dictionaries $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ and structure tensor $\mathcal{C}$.
Step 5.1, initialize the wide mode ($\mathbf{W}$) and high mode ($\mathbf{H}$) of the image with the dictionary-update-cycle K-SVD (DUC-KSVD) algorithm, initialize the time-spectrum mode with the simplex identification via split augmented Lagrangian (SISAL) algorithm, and initialize the structure tensor $\mathcal{C}$ with the alternating direction method of multipliers (ADMM). Since the objective function (7) is non-convex, the solution for $\mathcal{C}$, $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ is not unique, but when the other variables are fixed the objective is convex in each single variable; $\mathcal{C}$, $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ are therefore solved with the proximal alternating optimization (PAO) algorithm, which guarantees, under certain conditions, that $\mathcal{C}$, $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ converge and reach a critical point. The iterative updates of $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$, $\mathcal{C}$ are as follows:

$\mathbf{W} \leftarrow \arg\min_{\mathbf{W}} f(\mathbf{W},\,\mathrm{pre}\mathbf{H},\,\mathrm{pre}\mathbf{ST},\,\mathrm{pre}\mathcal{C}) + \beta\|\mathbf{W}-\mathrm{pre}\mathbf{W}\|_F^2$
$\mathbf{H} \leftarrow \arg\min_{\mathbf{H}} f(\mathbf{W},\,\mathbf{H},\,\mathrm{pre}\mathbf{ST},\,\mathrm{pre}\mathcal{C}) + \beta\|\mathbf{H}-\mathrm{pre}\mathbf{H}\|_F^2$
$\mathbf{ST} \leftarrow \arg\min_{\mathbf{ST}} f(\mathbf{W},\,\mathbf{H},\,\mathbf{ST},\,\mathrm{pre}\mathcal{C}) + \beta\|\mathbf{ST}-\mathrm{pre}\mathbf{ST}\|_F^2$
$\mathcal{C} \leftarrow \arg\min_{\mathcal{C}} f(\mathbf{W},\,\mathbf{H},\,\mathbf{ST},\,\mathcal{C}) + \beta\|\mathcal{C}-\mathrm{pre}\mathcal{C}\|_F^2$ (8)

in the above formulas, $f$ is given by the objective function (7), $\beta > 0$ is the proximal weight coefficient, and parameters prefixed with pre denote the values solved in the previous iteration; $\|\cdot\|_F$ denotes the F norm.
Step 5.2, optimize the dictionaries and the structure tensor: update and optimize the dictionaries $\mathbf{W}$, $\mathbf{H}$ and $\mathbf{ST}$ with the conjugate gradient (CG) algorithm, and update and solve the structure tensor $\mathcal{C}$ with the alternating direction method of multipliers (ADMM); with the solved optimal dictionaries $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ and structure tensor $\mathcal{C}$, estimate the target image $\mathcal{X}$ using formula (1).
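The patent solves the structure-tensor subproblem with ADMM; as an illustrative stand-in for that step, the same l1-regularized least-squares subproblem (dictionaries held fixed, single observation) can be sketched with a proximal-gradient (ISTA-style) loop, where soft-thresholding is what keeps the core sparse. All names and sizes here are assumptions:

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

def reconstruct(C, H, W, ST):
    # C x1 H x2 W x3 ST, as in formula (1)
    X = mode_n_product(C, H, 0)
    X = mode_n_product(X, W, 1)
    return mode_n_product(X, ST, 2)

def soft_threshold(x, tau):
    # proximal operator of tau*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def update_core(C, H, W, ST, Y, lam, iters=100):
    # min_C ||Y - C x1 H x2 W x3 ST||_F^2 + lam*||C||_1, dictionaries fixed.
    # Gradient of the data term is -2 * (residual x1 H^T x2 W^T x3 ST^T);
    # the step size 1/L comes from the Lipschitz constant of that gradient.
    L = 2.0 * (np.linalg.norm(H, 2) * np.linalg.norm(W, 2)
               * np.linalg.norm(ST, 2)) ** 2
    step = 1.0 / L
    for _ in range(iters):
        R = Y - reconstruct(C, H, W, ST)
        grad = -2.0 * reconstruct(R, H.T, W.T, ST.T)
        C = soft_threshold(C - step * grad, step * lam)
    return C

rng = np.random.default_rng(1)
H, W, ST = rng.random((8, 3)), rng.random((8, 3)), rng.random((6, 4))
C_true = np.zeros((3, 3, 4)); C_true[0, 1, 2] = 1.0; C_true[2, 0, 3] = -0.5
Y = reconstruct(C_true, H, W, ST)        # synthetic observation
lam = 0.01
C_hat = update_core(np.zeros_like(C_true), H, W, ST, Y, lam)
obj = lambda C: np.sum((Y - reconstruct(C, H, W, ST)) ** 2) + lam * np.sum(np.abs(C))
assert obj(C_hat) < obj(np.zeros_like(C_true))  # objective decreased
```

With the conservative step size the objective decreases monotonically, mirroring the convergence guarantee the patent cites for PAO; the full method additionally alternates over the dictionary updates.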
Claims (6)
1. A multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition is characterized by comprising the following steps:
step 1, preprocessing a hyperspectral HtSe-Lsa image with high temporal resolution, high spectral resolution and low spatial resolution, and preprocessing an Hsa-LtSe image with high spatial resolution, low temporal resolution and low spectral resolution;
step 2, expressing the multi-temporal HtSe-Lsa imagery and the Hsa-LtSe imagery in three-dimensional tensor form by exploiting the multi-dimensional expressive power of tensors: the image bands of different time phases are treated as bands carrying temporal-change information, and the remote sensing images of the same sensor at different time phases are stacked and expanded along the spectral dimension to serve as the third dimension;
step 3, constructing a time-space-spectrum tensor relation model of coupling between the target image and the input HtSe-Lsa image and Hsa-LtSe image;
step 4, initializing and solving dictionaries with different dimensions and structure tensors: establishing a time-space-spectrum integrated fusion model based on a variation framework;
and 5, optimizing the dictionaries with different dimensions and the structure tensor to obtain and reconstruct a target image.
2. The space-time spectrum integration fusion method for the multi-temporal remote sensing image based on the coupled sparse tensor decomposition as recited in claim 1, wherein the step 3 specifically comprises the following steps:
step 3.1, using a Tucker decomposition model, describe the multi-temporal target image $\mathcal{X}$ as a three-dimensional tensor, represented as:

$\mathcal{X} = \mathcal{C} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 \mathbf{ST}$ (1)

in the above formula, $\times_n$ denotes the mode-n product of a tensor and a matrix; $\mathcal{X}$ is the target image; $\mathcal{C} \in \mathbb{R}^{n_h \times n_w \times n_{st}}$ is the structure tensor of the target image $\mathcal{X}$, where $n_h$ is the number of atoms of the high-mode dictionary, $n_w$ the number of atoms of the wide-mode dictionary, and $n_{st}$ the number of atoms of the time-spectrum-mode dictionary; $\mathbf{H}$ denotes the high-mode dictionary with $n_h$ atoms, $\mathbf{W}$ the wide-mode dictionary with $n_w$ atoms, and $\mathbf{ST}$ the time-spectrum-mode dictionary with $n_{st}$ atoms; the high mode and the wide mode represent the spatial dimension information of the target image $\mathcal{X}$, the time-spectrum mode represents its temporal and spectral dimension information, and the elements of $\mathcal{C}$ are the relation coefficients among the three dictionaries in the time-space-spectrum tensor relation model;
step 3.2, take the input HtSe-Lsa image $\mathcal{Y}_h$ with high temporal resolution, high spectral resolution and low spatial resolution as the spatially degraded image of the target image $\mathcal{X}$; the degraded image is obtained from $\mathcal{X}$ by applying the point spread function and the down-sampling matrix along the high and wide modes:

$\mathcal{Y}_h = \mathcal{X} \times_1 P_2 \times_2 P_1$ (2)

in the above formula, $\mathcal{X}$ is the target image; $\times_n$ denotes the mode-n product of a tensor and a matrix; $P_1 \in \mathbb{R}^{w \times W}$ and $P_2 \in \mathbb{R}^{h \times H}$ are the spatial down-sampling operators along the wide and high modes respectively; the spatially degraded image $\mathcal{Y}_h$ of the target image $\mathcal{X}$ represents the degradation in spatial resolution between the fused image and the observed image;
Step 3.3: With the degradation operators acting on the high and wide modes respectively, combine formula (1) and formula (2) to write the image $\mathcal{Y}$ as:

$$\mathcal{Y} = \mathcal{G} \times_1 (\mathbf{P}_2\mathbf{H}) \times_2 (\mathbf{P}_1\mathbf{W}) \times_3 \mathbf{ST} = \mathcal{G} \times_1 \mathbf{H}^{*} \times_2 \mathbf{W}^{*} \times_3 \mathbf{ST} \qquad (3)$$

In the above formula, $\mathbf{W}^{*} = \mathbf{P}_1\mathbf{W}$ and $\mathbf{H}^{*} = \mathbf{P}_2\mathbf{H}$ are the down-sampled wide-mode and high-mode dictionaries, respectively, and $\times_n$ denotes the mode-$n$ product of a tensor and a matrix.
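The identity behind formula (3) — degrading the Tucker reconstruction equals reconstructing with down-sampled dictionaries — follows from the multilinearity of the mode product and can be checked numerically (illustrative sizes, random data):

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, ST = 8, 6, 4
n_h, n_w, n_st = 3, 3, 2
h, w = 4, 3

G   = rng.standard_normal((n_h, n_w, n_st))   # structure (core) tensor
Dh  = rng.standard_normal((H, n_h))           # high-mode dictionary
Dw  = rng.standard_normal((W, n_w))           # wide-mode dictionary
Dst = rng.standard_normal((ST, n_st))         # time-spectrum dictionary
P2  = rng.standard_normal((h, H))             # high-mode degradation operator
P1  = rng.standard_normal((w, W))             # wide-mode degradation operator

recon = lambda core, A, B, C: np.einsum('abc,ia,jb,kc->ijk', core, A, B, C)
Z = recon(G, Dh, Dw, Dst)                     # formula (1)
Y_direct = np.einsum('ij,jkl->ikl', P2, Z)    # degrade the reconstruction ...
Y_direct = np.einsum('kj,ijl->ikl', P1, Y_direct)
Y_dict = recon(G, P2 @ Dh, P1 @ Dw, Dst)      # ... or use down-sampled dictionaries
print(np.allclose(Y_direct, Y_dict))  # True
```

This is what lets the method fit the small observed image directly in the dictionary domain instead of the full-resolution image domain.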
Step 3.4: Input the Hsa-LtSe image (high spatial resolution, low temporal resolution, low spectral resolution) $\mathcal{X}$ as the version of the target image $\mathcal{Z}$ degraded in the time and spectral dimensions. The degraded image $\mathcal{X}$ is obtained from $\mathcal{Z}$ by applying down-sampling operators in the time and spectral dimensions:

$$\mathcal{X} = \mathcal{Z} \times_3 (\mathbf{P}_4 \otimes \mathbf{P}_3) \qquad (4)$$

In the above formula, $\times_n$ denotes the mode-$n$ product of a tensor and a matrix; $\mathbf{P}_3 \in \mathbb{R}^{s \times S}$ and $\mathbf{P}_4 \in \mathbb{R}^{t \times T}$ are the down-sampling operators in the spectral and time dimensions, respectively, acting jointly on the combined time-spectrum mode; formula (4) models the degradation between the fused image and the observed image in the time-spectrum dimension. Combining formula (1) and formula (4), the image $\mathcal{X}$ can be written as:

$$\mathcal{X} = \mathcal{G} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 \big((\mathbf{P}_4 \otimes \mathbf{P}_3)\,\mathbf{ST}\big) = \mathcal{G} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 \mathbf{ST}^{*} \qquad (5)$$

where $\mathbf{ST}^{*} = (\mathbf{P}_4 \otimes \mathbf{P}_3)\,\mathbf{ST}$ is the down-sampled time-spectrum dictionary.
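For the time-spectrum degradation, a small sketch assuming the combined time-spectral axis is flattened time-major, so that the spectral operator P3 and the temporal operator P4 act jointly through a Kronecker product (this flattening convention is my assumption, not stated in the claim):

```python
import numpy as np

H, W = 6, 5
S, T = 4, 2            # spectral bands and time phases of the target image
s, t = 2, 1            # bands and phases kept in the observed Hsa-LtSe image
# Third axis flattened time-major: index = phase * S + band (assumed).
Z = np.arange(H * W * S * T, dtype=float).reshape(H, W, S * T)

P3 = np.zeros((s, S)); P3[0, 0] = P3[1, 2] = 1.0   # spectral: keep bands 0 and 2
P4 = np.zeros((t, T)); P4[0, 0] = 1.0              # temporal: keep phase 0
P_st = np.kron(P4, P3)                             # joint operator, (s*t) x (S*T)

# Formula (4): X = Z x3 P_st  (degradation along the combined mode)
X = np.einsum('ij,klj->kli', P_st, Z)
print(X.shape)  # (6, 5, 2)
```

With these selection matrices, the result is exactly the sub-stack of band 0 and band 2 at time phase 0.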
3. The multi-temporal remote sensing image space-time-spectrum integrated fusion method based on coupled sparse tensor decomposition as recited in claim 1, wherein step 4 specifically comprises the following steps:
Step 4.1: Couple the spatial degradation model (3) and the time-spectrum degradation model (5) into a sparsity-constrained estimation model for the dictionaries and the structure tensor:

$$\min_{\mathbf{W},\,\mathbf{H},\,\mathbf{ST},\,\mathcal{G}} \ \big\|\mathcal{Y} - \mathcal{G} \times_1 \mathbf{H}^{*} \times_2 \mathbf{W}^{*} \times_3 \mathbf{ST}\big\|_F^2 + \big\|\mathcal{X} - \mathcal{G} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 \mathbf{ST}^{*}\big\|_F^2 \quad \text{s.t.}\ \|\mathcal{G}\|_0 \le N \qquad (6)$$

In the above formula, $\|\cdot\|_0$ and $\|\cdot\|_F$ denote the $\ell_0$ norm and the Frobenius norm, respectively; $N$ is the maximum number of non-zero elements in the structure tensor $\mathcal{G}$; $\mathbf{H}$ is the high-mode dictionary with $n_h$ atoms, $\mathbf{W}$ is the wide-mode dictionary with $n_w$ atoms, and $\mathbf{ST}$ is the time-spectrum-mode dictionary with $n_{st}$ atoms; $\mathcal{Y}$ is the spatially degraded target image and $\mathcal{X}$ is the time-spectrally degraded target image; $\times_n$ denotes the mode-$n$ product; $\mathbf{W}^{*}$ and $\mathbf{H}^{*}$ are the down-sampled wide-mode and high-mode dictionaries, and $\mathbf{ST}^{*}$ is the down-sampled time-spectrum dictionary.
Step 4.2: Relax the non-convex $\ell_0$ constraint in (6) into an $\ell_1$ regularization term, yielding the coupled sparse tensor decomposition model:

$$\min_{\mathbf{W},\,\mathbf{H},\,\mathbf{ST},\,\mathcal{G}} \ \big\|\mathcal{Y} - \mathcal{G} \times_1 \mathbf{H}^{*} \times_2 \mathbf{W}^{*} \times_3 \mathbf{ST}\big\|_F^2 + \big\|\mathcal{X} - \mathcal{G} \times_1 \mathbf{H} \times_2 \mathbf{W} \times_3 \mathbf{ST}^{*}\big\|_F^2 + \lambda \|\mathcal{G}\|_1 \qquad (7)$$

In the above formula, $\lambda$ is the sparse regularization parameter and $\|\cdot\|_1$ is the $\ell_1$ norm; the remaining symbols are defined as in formula (6).
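The coupled objective with the l1 penalty can be evaluated directly. The sketch below (illustrative, random data, invented sizes) generates the observations exactly from the model, so both data terms vanish and only the sparsity penalty remains:

```python
import numpy as np

def recon(core, A, B, C):
    """Tucker reconstruction: core x1 A x2 B x3 C."""
    return np.einsum('abc,ia,jb,kc->ijk', core, A, B, C)

def objective(Y, X, G, Hd, Wd, STd, P1, P2, Pst, lam):
    """Formula-(7)-style value: two coupled Frobenius data terms
    plus an l1 sparsity penalty on the structure tensor G."""
    Y_hat = recon(G, P2 @ Hd, P1 @ Wd, STd)   # formula (3) prediction
    X_hat = recon(G, Hd, Wd, Pst @ STd)       # formula (5) prediction
    return (np.linalg.norm(Y - Y_hat) ** 2
            + np.linalg.norm(X - X_hat) ** 2
            + lam * np.abs(G).sum())

rng = np.random.default_rng(3)
G   = rng.standard_normal((3, 3, 2))          # structure tensor
Hd  = rng.standard_normal((8, 3))             # high-mode dictionary
Wd  = rng.standard_normal((6, 3))             # wide-mode dictionary
STd = rng.standard_normal((4, 2))             # time-spectrum dictionary
P2, P1, Pst = (rng.standard_normal((4, 8)),   # high, wide, time-spectrum operators
               rng.standard_normal((3, 6)),
               rng.standard_normal((2, 4)))

# Observations generated exactly by the model: the data terms are zero,
# so the objective reduces to the sparsity penalty alone.
Y = recon(G, P2 @ Hd, P1 @ Wd, STd)
X = recon(G, Hd, Wd, Pst @ STd)
lam = 0.1
val = objective(Y, X, G, Hd, Wd, STd, P1, P2, Pst, lam)
print(np.isclose(val, lam * np.abs(G).sum()))  # True
```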
4. The multi-temporal remote sensing image space-time-spectrum integrated fusion method based on coupled sparse tensor decomposition as recited in claim 2, wherein step 5 specifically comprises the following steps:
Step 5.1: Initialize the wide-mode and high-mode dictionaries of the image with a dictionary-update cyclic singular value decomposition (K-SVD-type) algorithm, initialize the time-spectrum-mode dictionary with the simplex identification via split augmented Lagrangian (SISAL) algorithm, and initialize the structure tensor $\mathcal{G}$ with the alternating direction method of multipliers (ADMM). Then solve for $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ and $\mathcal{G}$ with a proximal alternating optimization (PAO) algorithm, which guarantees that $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ and $\mathcal{G}$ converge to a critical point. The iterative updates of $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ and $\mathcal{G}$ are as follows:

$$
\begin{aligned}
\mathbf{W}^{(k+1)} &= \arg\min_{\mathbf{W}} f\big(\mathbf{W}, \mathbf{H}^{(k)}, \mathbf{ST}^{(k)}, \mathcal{G}^{(k)}\big) + \tfrac{\beta}{2}\big\|\mathbf{W} - \mathbf{W}^{(k)}\big\|_F^2 \\
\mathbf{H}^{(k+1)} &= \arg\min_{\mathbf{H}} f\big(\mathbf{W}^{(k+1)}, \mathbf{H}, \mathbf{ST}^{(k)}, \mathcal{G}^{(k)}\big) + \tfrac{\beta}{2}\big\|\mathbf{H} - \mathbf{H}^{(k)}\big\|_F^2 \\
\mathbf{ST}^{(k+1)} &= \arg\min_{\mathbf{ST}} f\big(\mathbf{W}^{(k+1)}, \mathbf{H}^{(k+1)}, \mathbf{ST}, \mathcal{G}^{(k)}\big) + \tfrac{\beta}{2}\big\|\mathbf{ST} - \mathbf{ST}^{(k)}\big\|_F^2 \\
\mathcal{G}^{(k+1)} &= \arg\min_{\mathcal{G}} f\big(\mathbf{W}^{(k+1)}, \mathbf{H}^{(k+1)}, \mathbf{ST}^{(k+1)}, \mathcal{G}\big) + \tfrac{\beta}{2}\big\|\mathcal{G} - \mathcal{G}^{(k)}\big\|_F^2
\end{aligned} \qquad (8)
$$

In the above formulas, $f$ is given by the objective function (7); $\beta > 0$ is the proximal weight coefficient; the superscript $(k)$ denotes the value solved in the previous iteration; $\|\cdot\|_F$ denotes the Frobenius norm.
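Each PAO sub-step adds a proximal term $(\beta/2)\|x - x_{\text{pre}}\|_F^2$ to the objective, pulling the new iterate toward the previous one. For a quadratic data term the sub-step has a closed form; below is a one-block numeric sketch (a stand-in quadratic, not the patent's actual sub-solver):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))   # stand-in quadratic data term f(x) = ||Ax - b||^2
b = rng.standard_normal(8)
x_pre = np.zeros(5)               # previous iterate
beta = 2.0                        # proximal weight, beta > 0

# PAO sub-step: argmin_x ||Ax - b||^2 + (beta/2) ||x - x_pre||^2,
# solved in closed form via its normal equations.
x_new = np.linalg.solve(A.T @ A + (beta / 2) * np.eye(5),
                        A.T @ b + (beta / 2) * x_pre)

# Compared with the plain least-squares minimizer, the proximal term
# keeps the update closer to the previous iterate:
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_new - x_pre) <= np.linalg.norm(x_ls - x_pre))  # True
```

This dampening of each block update is what gives the alternating scheme its convergence guarantee to a critical point.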
Step 5.2: Optimize the dictionaries and the structure tensor: update the dictionaries $\mathbf{W}$, $\mathbf{H}$ and $\mathbf{ST}$ with a conjugate gradient algorithm, and update the structure tensor $\mathcal{G}$ with the alternating direction method of multipliers (ADMM). With the solved optimal dictionaries $\mathbf{W}$, $\mathbf{H}$, $\mathbf{ST}$ and structure tensor $\mathcal{G}$, estimate the target image $\mathcal{Z}$ using formula (1).
5. The multi-temporal remote sensing image space-time-spectrum integrated fusion method based on coupled sparse tensor decomposition, characterized in that the preprocessing in step 1 comprises: radiometric calibration, atmospheric correction, orthorectification, and image registration.
6. The multi-temporal remote sensing image space-time-spectrum integrated fusion method based on coupled sparse tensor decomposition, characterized in that: in step 2, the data of missing time phases of the Hsa-LtSe image are filled with 0.
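The zero-filling of claim 6 can be sketched as follows (hypothetical array sizes; the real stack dimensions depend on the sensor series):

```python
import numpy as np

# Hypothetical Hsa-LtSe stack: T = 3 expected time phases, phase 1 unobserved.
T, H, W, S = 3, 4, 4, 2
observed = {0: np.ones((H, W, S)), 2: np.full((H, W, S), 2.0)}

cube = np.zeros((T, H, W, S))     # missing phases remain zero-filled
for phase, img in observed.items():
    cube[phase] = img
print(cube[1].sum())  # 0.0
```

The zero-filled phases are then reconstructed by the fusion model rather than treated as valid observations.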
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110403626.7A CN113112591B (en) | 2021-04-15 | 2021-04-15 | Multi-temporal remote sensing image space-time spectrum fusion method based on coupling sparse tensor decomposition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113112591A true CN113112591A (en) | 2021-07-13 |
CN113112591B CN113112591B (en) | 2022-08-26 |
Family
ID=76717048
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110403626.7A Active CN113112591B (en) | 2021-04-15 | 2021-04-15 | Multi-temporal remote sensing image space-time spectrum fusion method based on coupling sparse tensor decomposition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113112591B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113989406A (en) * | 2021-12-28 | 2022-01-28 | 成都理工大学 | Tomography gamma scanning image reconstruction method based on sparse tensor dictionary learning |
CN115346004A (en) * | 2022-10-18 | 2022-11-15 | 深圳市规划和自然资源数据管理中心(深圳市空间地理信息中心) | Remote sensing time sequence data reconstruction method combining space-time reconstruction and CUDA acceleration |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140064554A1 (en) * | 2011-11-14 | 2014-03-06 | San Diego State University Research Foundation | Image station matching, preprocessing, spatial registration and change detection with multi-temporal remotely-sensed imagery |
US20140301659A1 (en) * | 2013-04-07 | 2014-10-09 | Bo Li | Panchromatic Sharpening Method of Spectral Image Based on Fusion of Overall Structural Information and Spatial Detail Information |
CN106651820A (en) * | 2016-09-23 | 2017-05-10 | 西安电子科技大学 | Sparse tensor neighborhood embedding-based remote sensing image fusion method |
CN106780345A (en) * | 2017-01-18 | 2017-05-31 | 西北工业大学 | Based on the hyperspectral image super-resolution reconstruction method that coupling dictionary and space conversion are estimated |
CN107977951A (en) * | 2017-12-25 | 2018-05-01 | 咸阳师范学院 | The multispectral and hyperspectral image fusion method decomposed based on Coupling Tensor |
CN111340080A (en) * | 2020-02-19 | 2020-06-26 | 济南大学 | High-resolution remote sensing image fusion method and system based on complementary convolution characteristics |
Non-Patent Citations (2)
Title |
---|
HUANFENG SHEN et al.: "An Integrated Framework for the Spatio-Temporal-Spectral Fusion of Remote Sensing Images", IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 12, Dec. 2016 |
MENG Xiangchao et al.: "Spatio-spectral fusion of GF-5 and GF-1 remote sensing images based on multi-resolution analysis", Journal of Remote Sensing |
Also Published As
Publication number | Publication date |
---|---|
CN113112591B (en) | 2022-08-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Cheng et al. | Sparse representation based pansharpening using trained dictionary | |
CN114119444B (en) | Multi-source remote sensing image fusion method based on deep neural network | |
CN113112591B (en) | Multi-temporal remote sensing image space-time spectrum fusion method based on coupling sparse tensor decomposition | |
CN108520495B (en) | Hyperspectral image super-resolution reconstruction method based on clustering manifold prior | |
Sdraka et al. | Deep learning for downscaling remote sensing images: Fusion and super-resolution | |
CN114998167B (en) | High-spectrum and multi-spectrum image fusion method based on space-spectrum combined low rank | |
Liu et al. | A practical pan-sharpening method with wavelet transform and sparse representation | |
Aghamaleki et al. | Image fusion using dual tree discrete wavelet transform and weights optimization | |
CN117252761A (en) | Cross-sensor remote sensing image super-resolution enhancement method | |
CN115496662A (en) | High-order tensor spectral image super-resolution reconstruction method based on spectral information fusion | |
Lu et al. | Hyper-sharpening based on spectral modulation | |
Wen et al. | A novel spatial fidelity with learnable nonlinear mapping for panchromatic sharpening | |
Mei et al. | Lightweight multiresolution feature fusion network for spectral super-resolution | |
Xu et al. | Degradation-aware dynamic fourier-based network for spectral compressive imaging | |
Shi et al. | A pansharpening method based on hybrid-scale estimation of injection gains | |
CN109785281B (en) | Spectrum mapping based gray level amplitude modulation panning method | |
CN114140359B (en) | Remote sensing image fusion sharpening method based on progressive cross-scale neural network | |
Divekar et al. | Image fusion by compressive sensing | |
Cui et al. | Meta-TR: meta-attention spatial compressive imaging network with swin transformer | |
Zhang et al. | Considering nonoverlapped bands construction: A general dictionary learning framework for hyperspectral and multispectral image fusion | |
Yuan et al. | Hyperspectral and multispectral image fusion using non-convex relaxation low rank and total variation regularization | |
Rostami et al. | Hyperspectral image super-resolution via learning an undercomplete dictionary and intra-algorithmic postprocessing | |
Zheng et al. | An unsupervised hyperspectral image fusion method based on spectral unmixing and deep learning | |
Xie et al. | Two-stage fusion based CNN for hyperspectral pansharpening | |
Lei et al. | An interpretable deep neural network for panchromatic and multispectral image fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||

Inventor after: Sun Weiwei; Zhou Jun; Meng Xiangchao; Yang Gang
Inventor before: Sun Weiwei; Zhou Jun; Meng Xiangchao; Yang Gang

GR01 | Patent grant | ||