CN113112591A - Multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition - Google Patents

Multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition Download PDF

Info

Publication number
CN113112591A
CN113112591A CN202110403626.7A CN202110403626A CN113112591A CN 113112591 A CN113112591 A CN 113112591A CN 202110403626 A CN202110403626 A CN 202110403626A CN 113112591 A CN113112591 A CN 113112591A
Authority
CN
China
Prior art keywords
image
time
tensor
mode
dictionary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110403626.7A
Other languages
Chinese (zh)
Other versions
CN113112591B (en
Inventor
孙伟伟
周俊
孟祥超
杨刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Yongju Space Information Technology Co ltd
Original Assignee
Ningbo Yongju Space Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Yongju Space Information Technology Co ltd filed Critical Ningbo Yongju Space Information Technology Co ltd
Priority to CN202110403626.7A priority Critical patent/CN113112591B/en
Publication of CN113112591A publication Critical patent/CN113112591A/en
Application granted granted Critical
Publication of CN113112591B publication Critical patent/CN113112591B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition, which comprises the following steps of: preprocessing a hyperspectral image HtSe-Lsa with high temporal resolution, high spectral resolution and low spatial resolution; the multi-temporal HtSe-Lsa image and the Hsa-LtSe image are expressed in a three-dimensional tensor form. The invention has the beneficial effects that: the third dimension is expanded by using a plurality of images with long time sequence, a plurality of images with high resolution can be generated at the same time by one-time fusion, and the integration fusion of a plurality of images with different time phases is realized; the multi-dimensional expression advantage of tensor is utilized, and target images of different time phases are overlapped in a spectrum dimension to realize the expansion of the time phases; on one hand, the calculation efficiency is improved through dimensionality reduction, on the other hand, the dimensionality is reduced, error accumulation caused by too many variables needing to be solved due to overhigh dimensionality is reduced, the complexity of a reconstruction model is greatly reduced, and the reconstruction efficiency is improved.

Description

Multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition
Technical Field
The invention relates to the field of remote sensing image processing, in particular to a multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition.
Background
With the rapid development of remote sensing technology, a satellite sensor can obtain a large number of remote sensing images with different time, space and spectral resolutions every day. The time sequence remote sensing image with high spatial and high spectral resolution can provide huge possibility for monitoring and researching the rapid change of the earth surface. However, due to the limitation of an imaging system of the satellite sensor, the acquired images are mutually restricted in terms of time resolution, spatial resolution and spectral resolution, and no satellite sensor in the world can acquire remote sensing images with high spatial resolution, high spectral resolution and high time resolution at the same time.
The fusion of the remote sensing images is realized by integrating multi-source remote sensing data which are complementary in time, space and spectrum according to a certain rule (or algorithm) so as to obtain more accurate and richer information than any single data. The existing remote sensing image fusion method can be divided into multi-view space fusion, space-spectrum fusion, time-space fusion and the like based on different purposes. Multi-view spatial fusion generates higher spatial resolution images by processing multi-view (multi-temporal, multi-angle) remote sensing images with sub-pixel uniqueness. The space-spectrum fusion utilizes the space-spectrum complementary characteristic among multi-source image data to generate the remote sensing image with high spatial resolution and high spectral resolution simultaneously. The method mainly comprises panchromatic/multispectral image fusion and panchromatic (or multispectral)/hyperspectral image fusion. And time-space fusion utilizes the time-space complementary characteristic among multi-source image data to generate a high-spatial-resolution remote sensing image with continuous time. The method mainly comprises a fusion method based on single-phase image pair assistance and a fusion method based on multi-phase image pair assistance.
On one hand, most of the existing fusion methods are only applied to two of integration time, space and spectrum relations, and fusion images with high time, space and high spectral resolution cannot be obtained simultaneously; on the other hand, these fusion methods can generally only be used to generate a single-phase high-resolution fusion image, and cannot utilize multiple closely-timed low-spatial-resolution/spectral-resolution images to be fused with sparse-timed high-spatial-resolution/spectral-resolution images to generate multiple high-resolution images at a time.
The remote sensing image with high time, high space and high spectral resolution has important significance for the application of multiple fields such as fine monitoring of natural resources, fine agriculture and the like, but due to the restriction of a sensor imaging system, a single sensor cannot obtain the image with high time, high space and high spectral resolution. The time-space-spectrum fusion can integrate the complementary advantages among multi-source images to generate a high time-space-spectrum resolution image, however, the existing time-space-spectrum fusion method only outputs the high time-spectrum resolution image at a specific moment at a time, and the time sequence relation among the images is not considered enough.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition.
The space-time spectrum integrated fusion method of the multi-temporal remote sensing image based on the coupled sparse tensor decomposition comprises the following steps of:
step 1, preprocessing a hyperspectral HtSe-Lsa image with high temporal resolution, high spectral resolution and low spatial resolution, and preprocessing an Hsa-LtSe image with high spatial resolution, low temporal resolution and low spectral resolution;
step 2, expressing the multi-temporal HtSe-Lsa image and the Hsa-LtSe image into a three-dimensional tensor form by utilizing the multi-dimensional expression advantage of the tensor; taking the image wave bands which are different in phase as image wave bands with time change information, and superposing and expanding the remote sensing images of the same sensor at different time phases in a spectral dimension to be used as a third dimension;
step 3, constructing a time-space-spectrum tensor relation model of coupling between the target image and the input HtSe-Lsa image and Hsa-LtSe image;
step 4, initializing and solving dictionaries with different dimensions and structure tensors: establishing a time-space-spectrum integrated fusion model based on a variation framework;
and 5, optimizing the dictionaries with different dimensions and the structure tensor to obtain and reconstruct a target image.
Preferably, step 3 specifically comprises the following steps:
step 3.1, utilizing a Tucker decomposition model to separate the target images of multiple time phases
Figure BDA0003021340060000021
A tensor described as three dimensions, said three dimensional tensor represented as:
Figure BDA0003021340060000022
in the above formula, the multiplier with numbers in the back is extractednRepresenting the modulo-n product of the tensor and matrix,
Figure BDA0003021340060000023
is a target image;
Figure BDA0003021340060000024
is a target image
Figure BDA0003021340060000025
The structure tensor of (a) is,
Figure BDA0003021340060000026
wherein n ishNumber of atoms of high-modulus dictionary, nwNumber of atoms, n, of wide-mode dictionarystIs the number of atoms in the time-spectrum mode dictionary;
Figure BDA0003021340060000027
is represented by having a number nhA high modulus dictionary of atoms;
Figure BDA0003021340060000028
is represented by having nwA wide-mode dictionary of atoms;
Figure BDA0003021340060000029
is represented by having a number nstAn atomic time-spectral mode dictionary; the high mode and the wide mode represent the target image
Figure BDA00030213400600000210
The time-spectrum mode represents the target image
Figure BDA00030213400600000211
The time and spectral dimensional information of the optical fiber,
Figure BDA00030213400600000212
the element in the relation model is a relation coefficient between three dictionaries in the time-space-spectrum tensor relation model;
step 3.2, inputting the HtSe-Lsa image with high time resolution, high spectral resolution and low spatial resolution
Figure BDA00030213400600000213
As a target image
Figure BDA00030213400600000214
Spatially degraded image, the degraded image
Figure BDA00030213400600000215
From the target image
Figure BDA00030213400600000216
The high-mode and wide-mode action point spread functions and the down-sampling matrix are obtained:
Figure BDA00030213400600000217
in the above formula, the multiplier with numbers in the back is extractednRepresenting the modulo-n product of the tensor and matrix,
Figure BDA0003021340060000031
is a target image; p1∈Rw×W,P2∈Rh×HFor spatial down-sampling operators along the broad and high modes, respectively, the targetImage forming method
Figure BDA0003021340060000032
Spatially degraded imagery
Figure BDA0003021340060000033
Representing the degradation of the spatial resolution of the fused image and the observation image;
step 3.3, assuming degraded image
Figure BDA0003021340060000034
Under the condition of respectively acting on the wide mode and the high mode, combining the formula (1) and the formula (2) to form the image
Figure BDA0003021340060000035
Write as:
Figure BDA0003021340060000036
in the above formula, the first and second carbon atoms are,
Figure BDA0003021340060000037
down-sampling dictionaries with wide and high modes respectively, and multiplier notation with numbers in the backnA modulo-n product representing the tensor and matrix;
step 3.4, inputting Hsa-LtSe images with high spatial resolution, low temporal resolution and low spectral resolution
Figure BDA0003021340060000038
As a target image
Figure BDA0003021340060000039
Degraded image in time and spectral dimensions, the degraded image
Figure BDA00030213400600000310
From the target image
Figure BDA00030213400600000311
In time and spectrumThe dimension acts on the down-sampling operator to obtain:
Figure BDA00030213400600000312
in the above formula, the multiplier with numbers in the back is extractednRepresenting the modulo-n product of the tensor and matrix, P3∈Rs×S,P4∈Rt×TRespectively representing the down-sampling operators on the spectral dimension and the time dimension, representing the degradation of the fused image and the observed image on the time-spectral dimension, combining the formula (1) and the formula (4), and converting the HR-MS image into the HR-MS image
Figure BDA00030213400600000313
Write as:
Figure BDA00030213400600000314
in the above formula, the multiplier with numbers in the back is extractednRepresenting the modulo-n product of the tensor and matrix,
Figure BDA00030213400600000315
representing a down-sampled dictionary in the time-spectral dimension.
Preferably, step 4 specifically comprises:
Figure BDA00030213400600000316
in the above formula, | · the luminance | |0And · aFRespectively represent l0Norm and F norm, N is structure tensor
Figure BDA00030213400600000317
The maximum number of non-zero elements in (a);
Figure BDA00030213400600000318
is represented by having a number nhA high modulus dictionary of atoms;
Figure BDA00030213400600000319
is represented by having nwA wide-mode dictionary of atoms;
Figure BDA00030213400600000320
is represented by having a number nstAn atomic time-spectral mode dictionary;
Figure BDA00030213400600000321
is a target image
Figure BDA00030213400600000322
Spatially degraded images;
Figure BDA0003021340060000041
is a target image
Figure BDA0003021340060000042
The structure tensor of (1), the multiplier with the number behindnA modulo-n product representing the tensor and matrix;
Figure BDA0003021340060000043
down-sampling dictionaries of wide mode and high mode respectively;
Figure BDA0003021340060000044
a down-sampling dictionary representing a time-spectrum dimension;
Figure BDA0003021340060000045
in the above formula, the multiplier with numbers in the back is extractednRepresenting the modulo-n product of the tensor and matrix,
Figure BDA0003021340060000046
is represented by having a number nhA high modulus dictionary of atoms;
Figure BDA0003021340060000047
is represented by having nwWide-mode dictionary of individual atoms;
Figure BDA0003021340060000048
Is represented by having a number nstAn atomic time-spectral mode dictionary;
Figure BDA0003021340060000049
a degraded image of the target image chi on the space is obtained;
Figure BDA00030213400600000410
is a target image
Figure BDA00030213400600000411
The structure tensor of (a);
Figure BDA00030213400600000412
down-sampling dictionaries of wide mode and high mode respectively;
Figure BDA00030213400600000413
a down-sampling dictionary representing a time-spectrum dimension; λ is sparse regularization parameter, | ·| non-woven phosphor1Is composed of1And (4) norm.
Preferably, step 5 specifically comprises the following steps:
step 5.1, initializing wide mode and high mode of the image based on a dictionary updating cycle-singular value decomposition algorithm, initializing time-spectrum mode based on a simplest simplex segmentation and augmentation Lagrange algorithm, and initializing structure tensor based on an alternate direction multiplier algorithm
Figure BDA00030213400600000414
Initialization is performed by solving using a near-end iterative optimization (PAO) algorithm
Figure BDA00030213400600000415
W, H, ST, guarantee
Figure BDA00030213400600000416
W, H and ST converge and find out key points; the ratio of W, H, ST,
Figure BDA00030213400600000417
the iterative update of (c) is as follows:
Figure BDA00030213400600000418
in the above formula, the first and second carbon atoms are,
Figure BDA00030213400600000419
given by the objective function (7), β is the near-end weight coefficient, β>0; the parameter with pre subscript represents the value solved in the last iteration;
Figure BDA00030213400600000420
is represented by having a number nhA high modulus dictionary of atoms;
Figure BDA00030213400600000421
is represented by having nwA wide-mode dictionary of atoms;
Figure BDA00030213400600000422
is represented by having a number nstAn atomic time-spectral mode dictionary;
Figure BDA00030213400600000423
is a target image
Figure BDA00030213400600000424
The structure tensor of (a); i | · | purple windFRepresents the F norm;
and 5.2, optimizing the dictionary and the structure tensor: updating and optimizing dictionaries W, H and ST based on a conjugate gradient algorithm, and carrying out structure tensor based on an alternative direction multiplier algorithm
Figure BDA0003021340060000051
Carrying out updating optimization solution; for the solved optimal W, H, ST dictionary and structure tensor
Figure BDA0003021340060000052
Using formulas(1) Estimating to obtain a target image
Figure BDA0003021340060000053
Preferably, the pretreatment means in step 1 comprises: radiometric calibration, atmospheric correction, ortho correction, and image registration.
Preferably, in step 2, the data of the Hsa-LtSe image-missing time phase is filled with 0.
The invention has the beneficial effects that:
(1) according to the invention, the time dimension is overlapped on the spectrum, so that a third dimension can be expanded by using a plurality of long time sequence images, a plurality of high-resolution images can be generated at the same time by one-time fusion, and the integrated fusion of a plurality of images in different time phases is realized.
(2) According to the invention, by utilizing the multi-dimensional expression advantage of tensor, target images of different time phases are overlapped in a spectrum dimension to realize the expansion of the time phases, the time phases are expressed as a three-dimensional tensor form, and the time-space-spectrum fusion problem of a plurality of four-dimensional time phases is converted into a three-dimensional fusion problem, so that on one hand, the calculation efficiency is improved by reducing the dimension, on the other hand, the dimension is reduced, the error accumulation caused by too many variables needing to be solved due to overhigh dimension is reduced, the complexity of a reconstruction model is greatly reduced, and the reconstruction efficiency is improved.
(3) The invention establishes an integrated multi-temporal multi-resolution sensor image coupling relation model. The observed image is considered to be a degraded image of the fusion target image on a time dimension, a spectrum or a space, different degradation functions can be set according to the characteristics of the sensor, and the method is suitable for the fusion of different types of sensor images with different resolutions, such as time-space fusion and space-spectrum fusion, and the fusion of time-space-spectrum, and has strong universality.
(4) The invention introduces sparse prior to the target image, namely the structure tensor is sparse, and the high autocorrelation on the image space and the high redundancy on the spectrum and the adjacent time phase are utilized, so that the high-quality time-space-spectrum fusion image can be obtained.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the following examples. The following examples are set forth merely to aid in the understanding of the invention. It should be noted that, for a person skilled in the art, several modifications can be made to the invention without departing from the principle of the invention, and these modifications and modifications also fall within the protection scope of the claims of the present invention.
The invention utilizes the multidimensional expression advantage of tensor to establish the time, space and spectrum relations among remote sensing images with different space-time spectral resolutions, simultaneously fully considers the space, spectrum and time autocorrelation on the high time-space-spectral resolution image space, has low dimensionality on the spectrum and high redundancy between adjacent time phases, introduces sparse prior to a target image as a regularization item to establish an optimal fusion model, realizes time sequence fusion image integrated output by iterative optimization solution, obtains a high time-space-spectral resolution fusion image with high space-time spectral information fidelity, and realizes the space-time spectrum integrated fusion of the long time sequence remote sensing image. The invention can adopt computer software to realize automatic operation flow.
As an embodiment, a flowchart is shown in fig. 1, and specifically includes the following steps:
step 1, preprocessing a hyperspectral HtSe-Lsa image with high temporal resolution, high spectral resolution and low spatial resolution, and preprocessing an Hsa-LtSe image with high spatial resolution, low temporal resolution and low spectral resolution; the pre-processing includes radiometric calibration, atmospheric correction, orthometric correction, and image registration.
Step 2, expressing the multi-temporal HtSe-Lsa image and the Hsa-LtSe image into a three-dimensional tensor form by utilizing the multi-dimensional expression advantage of the tensor; the remote sensing images of different time phases of the same scene generally have great redundancy, the redundancy is very similar to the low-dimensional property of a single image among different wave bands, the image wave bands among different phases are regarded as image wave bands with time change information, the remote sensing images of the same sensor in different time phases are overlapped and expanded in the spectral dimension to be used as a third dimension, and the data of the Hsa-LtSe image in the missing time phase are filled by using 0;
and 3, decomposing the HtSe-Lsa and Hsa-LtSe data in a tensor form into a structure tensor and three mode matrixes (high mode, wide mode and time-spectrum mode) by using Tucker decomposition. HtSe-Lsa is considered as a degraded image of a target image in space and is obtained by an objective function action point diffusion function and a space down-sampling matrix, HR-MS is a degraded function of the target image in spectrum and time, the spectrum degraded function is obtained by solving a sensor spectrum response function, and the time degraded function is obtained by time phase correlation; constructing a time-space-spectrum tensor relation model of coupling between a target image and an input HtSe-Lsa image and Hsa-LtSe image;
step 3.1, utilizing a Tucker decomposition model to separate the target images of multiple time phases
Figure BDA0003021340060000061
A tensor described as three dimensions, said three dimensional tensor represented as a structure tensor multiplied by a broad-mode, high-mode, time-spectral-mode dictionary:
Figure BDA0003021340060000062
in the above formula, the first and second carbon atoms are,
Figure BDA0003021340060000063
is a target image;
Figure BDA0003021340060000064
is the structure tensor of the target image x,
Figure BDA0003021340060000065
wherein n ishNumber of atoms of high-modulus dictionary, nwNumber of atoms, n, of wide-mode dictionarystIs the number of atoms in the time-spectrum mode dictionary;
Figure BDA0003021340060000066
is represented by having a number nhA high modulus dictionary of atoms;
Figure BDA0003021340060000067
is represented by having nwA wide-mode dictionary of atoms;
Figure BDA0003021340060000068
is represented by having a number nstAn atomic time-spectral mode dictionary; the high mode and the wide mode represent the target image
Figure BDA0003021340060000069
The time-spectrum mode represents the target image
Figure BDA00030213400600000610
The time and spectral dimensional information of the optical fiber,
Figure BDA00030213400600000611
the element in the relation model is a relation coefficient between three dictionaries in the time-space-spectrum tensor relation model;
step 3.2, inputting the HtSe-Lsa image with high time resolution, high spectral resolution and low spatial resolution
Figure BDA0003021340060000071
As a target image
Figure BDA0003021340060000072
Spatially degraded image, the degraded image
Figure BDA0003021340060000073
From the target image
Figure BDA0003021340060000074
The high and wide mode point of action spread functions (PSFs) and the downsampled matrix are obtained:
Figure BDA0003021340060000075
in the above formula, the first and second carbon atoms are,
Figure BDA0003021340060000076
is a target image; p1∈Rw×W,P2∈Rh×HFor spatial down-sampling operators along the wide and high modes, respectively, the target image
Figure BDA0003021340060000077
Spatially degraded imagery
Figure BDA0003021340060000078
Representing the degradation of the spatial resolution of the fused image and the observation image;
step 3.3, assuming degraded image
Figure BDA0003021340060000079
Under the condition of respectively acting on the wide mode and the high mode, combining the formula (1) and the formula (2) to form the image
Figure BDA00030213400600000710
Write as:
Figure BDA00030213400600000711
in the above formula, the first and second carbon atoms are,
Figure BDA00030213400600000712
down-sampling dictionaries of wide mode and high mode respectively; for Point Spread Functions (PSF), the separability assumption is valid, so that many benefits are brought in the calculation of the tensor;
step 3.4, inputting Hsa-LtSe images with high spatial resolution, low temporal resolution and low spectral resolution
Figure BDA00030213400600000713
As a target image
Figure BDA00030213400600000714
Degraded image in time and spectral dimensions, the degraded image
Figure BDA00030213400600000715
From the target image
Figure BDA00030213400600000716
Applying a down-sampling operator in the time and spectral dimensions to obtain:
Figure BDA00030213400600000717
in the above formula, P3∈Rs×S,P4∈Rt×TRespectively representing the down-sampling operators on the spectral dimension and the time dimension, representing the degradation of the fused image and the observed image on the time-spectral dimension, combining the formula (1) and the formula (4), and converting the HR-MS image into the HR-MS image
Figure BDA00030213400600000718
Write as:
Figure BDA00030213400600000719
in the above formula, the first and second carbon atoms are,
Figure BDA00030213400600000720
a down-sampling dictionary representing a time-spectrum dimension; imaging a target with time-space-spectral resolution
Figure BDA00030213400600000721
And the observed image
Figure BDA00030213400600000722
And
Figure BDA00030213400600000723
the relation in space and time-spectrum dimensions is comprehensively expressed in a tensor form, and coupled time-space-spectrum tensor is constructedQuantitative relation model, object image
Figure BDA00030213400600000724
By image
Figure BDA00030213400600000725
And
Figure BDA00030213400600000726
the dictionary (W, H, ST) and its corresponding structure tensor are solved based on the coupling relation model
Figure BDA00030213400600000727
Carrying out reconstruction;
step 4, initializing and solving dictionaries with different dimensions and structure tensors: taking the solution of the fusion image as a ill-conditioned inversion problem, introducing sparse prior to the structure tensor based on a variation framework, and establishing a time-space-spectrum integrated fusion model;
Figure BDA0003021340060000081
in the above formula, | · the luminance | |0And | · | non-conducting phosphorFRespectively represent l0Norm and F norm, N is structure tensor
Figure BDA0003021340060000082
The maximum number of non-zero elements in (a);
Figure BDA0003021340060000083
is represented by having a number nhA high modulus dictionary of atoms;
Figure BDA0003021340060000084
is represented by having nwA wide-mode dictionary of atoms;
Figure BDA0003021340060000085
is represented by having a number nstAn atomic time-spectral mode dictionary;
Figure BDA0003021340060000086
a degraded image of the target image chi on the space is obtained;
Figure BDA0003021340060000087
is a target image
Figure BDA00030213400600000824
The structure tensor of (a);
Figure BDA0003021340060000088
are respectively a wide mode
And a high-modulus downsampling dictionary;
Figure BDA0003021340060000089
a down-sampling dictionary representing a time-spectrum dimension; because the HtSe-Lsa image and the Hsa-LtSe image are respectively subjected to down-sampling in a space dimension and a time-spectrum dimension, the fusion problem is a morbid inversion problem, and some prior information of a target image needs to be introduced to regularize the fusion problem; since the target image has strong sparsity in both the spectral and temporal dimensions and self-similarity in space, the target image is considered to have sparsity in both the spatial and temporal-spectral dimensions, in tensor decomposition
Figure BDA00030213400600000825
Allowing sparse representation on all three dictionaries,
i.e. the structure tensor
Figure BDA00030213400600000810
Is sparse;
Figure BDA00030213400600000811
in the above formula, the first and second carbon atoms are,
Figure BDA00030213400600000812
is represented by having a number nhA high modulus dictionary of atoms;
Figure BDA00030213400600000813
is represented by having nwA wide-mode dictionary of atoms;
Figure BDA00030213400600000814
is represented by having a number nstAn atomic time-spectral mode dictionary;
Figure BDA00030213400600000815
is a target image
Figure BDA00030213400600000816
Spatially degraded images;
Figure BDA00030213400600000817
is a target image
Figure BDA00030213400600000823
The structure tensor of (a);
Figure BDA00030213400600000818
down-sampling dictionaries of wide mode and high mode respectively;
Figure BDA00030213400600000819
a down-sampling dictionary representing a time-spectrum dimension; λ is sparse regularization parameter, | ·| non-woven phosphor1Is 11A norm;
step 5, optimizing dictionaries in different dimensions and structure tensors to obtain and reconstruct a target image; using the solved optimal W, H, ST dictionary and structure tensor
Figure BDA00030213400600000820
And obtaining a target image.
Step 5.1, initializing a wide mode (W) and a high mode (H) of the image based on a dictionary updating cycle-singular value decomposition (DUC-KSVD) algorithm, initializing a time-spectrum mode based on a simplest pure form segmentation augmentation Lagrange (SISAL) algorithm, and initializing a structure tensor based on an alternating direction multiplier Algorithm (ADMM)
Figure BDA00030213400600000821
Initialising, solving since the objective function (7) is non-convex
Figure BDA00030213400600000822
W, H, ST are not unique, but when the other variables are fixed, the objective function is convex for a single variable; thus solving by using a near-end iterative optimization (PAO) algorithm
Figure BDA0003021340060000091
W, H, ST, guarantee
Figure BDA0003021340060000092
Converging W, H and ST under a certain condition and finding out key points; the ratio of W, H, ST,
Figure BDA0003021340060000093
the iterative update of (c) is as follows:
Figure BDA0003021340060000094
in the above formula, the first and second carbon atoms are,
Figure BDA0003021340060000095
given by the objective function (7), β is the near-end weight coefficient, β>0; the parameter with pre subscript represents the value solved in the last iteration;
Figure BDA0003021340060000096
is represented by having a number nhA high modulus dictionary of atoms;
Figure BDA0003021340060000097
is represented by having nwA wide-mode dictionary of atoms;
Figure BDA0003021340060000098
is represented by having a number nstAn atomic time-spectral mode dictionary;
Figure BDA0003021340060000099
is a target image
Figure BDA00030213400600000910
The structure tensor of (a); i | · | purple windFRepresents the F norm;
and 5.2, optimizing the dictionary and the structure tensor: updating and optimizing dictionaries W, H and ST based on a Conjugate Gradient (CG) algorithm, and carrying out structure tensor based on an alternating direction multiplier Algorithm (ADMM)
Figure BDA00030213400600000911
Carrying out updating optimization solution; for the solved optimal W, H, ST dictionary and structure tensor
Figure BDA00030213400600000912
Estimating and obtaining the target image by using the formula (1)
Figure BDA00030213400600000913

Claims (6)

1. A multi-temporal remote sensing image space-time spectrum integrated fusion method based on coupling sparse tensor decomposition is characterized by comprising the following steps:
step 1, preprocessing a hyperspectral HtSe-Lsa image with high temporal resolution, high spectral resolution and low spatial resolution, and preprocessing an Hsa-LtSe image with high spatial resolution, low temporal resolution and low spectral resolution;
step 2, expressing the multi-temporal HtSe-Lsa image and the Hsa-LtSe image into a three-dimensional tensor form by utilizing the multi-dimensional expression advantage of the tensor; taking the image wave bands which are different in phase as image wave bands with time change information, and superposing and expanding the remote sensing images of the same sensor at different time phases in a spectral dimension to be used as a third dimension;
step 3, constructing a time-space-spectrum tensor relation model of coupling between the target image and the input HtSe-Lsa image and Hsa-LtSe image;
step 4, initializing and solving dictionaries with different dimensions and structure tensors: establishing a time-space-spectrum integrated fusion model based on a variation framework;
and 5, optimizing the dictionaries with different dimensions and the structure tensor to obtain and reconstruct a target image.
2. The space-time spectrum integration fusion method for the multi-temporal remote sensing image based on the coupled sparse tensor decomposition as recited in claim 1, wherein the step 3 specifically comprises the following steps:
step 3.1, utilizing a Tucker decomposition model to separate the target images of multiple time phases
Figure FDA0003021340050000011
A tensor described as three dimensions, said three dimensional tensor represented as:
Figure FDA0003021340050000012
in the above formula, the multiplier with numbers in the back is extractednRepresenting the modulo-n product of the tensor and matrix,
Figure FDA0003021340050000013
is a target image;
Figure FDA0003021340050000014
is a target image
Figure FDA0003021340050000015
The structure tensor of (a) is,
Figure FDA0003021340050000016
wherein n ishNumber of atoms of high-modulus dictionary, nwNumber of atoms, n, of wide-mode dictionarystIs the number of atoms in the time-spectrum mode dictionary;
Figure FDA0003021340050000017
is represented by having a number nhOf atomsA high modulus dictionary;
Figure FDA0003021340050000018
is represented by having nwA wide-mode dictionary of atoms;
Figure FDA0003021340050000019
is represented by having a number nstAn atomic time-spectral mode dictionary; the high mode and the wide mode represent the target image
Figure FDA00030213400500000110
The time-spectrum mode represents the target image
Figure FDA00030213400500000111
The time and spectral dimensional information of the optical fiber,
Figure FDA00030213400500000112
the element in the relation model is a relation coefficient between three dictionaries in the time-space-spectrum tensor relation model;
step 3.2, inputting the HtSe-Lsa image with high time resolution, high spectral resolution and low spatial resolution
Figure FDA00030213400500000113
As a target image
Figure FDA00030213400500000114
Spatially degraded image, the degraded image
Figure FDA00030213400500000115
From the target image
Figure FDA00030213400500000116
The high-mode and wide-mode action point spread functions and the down-sampling matrix are obtained:
Figure FDA00030213400500000117
in the above formula, the first and second carbon atoms are,
Figure FDA0003021340050000021
is a target image; multiplier circuit with numerals in backnRepresenting the modulo-n product of the tensor and matrix, P1∈Rw×W,P2∈Rh×HFor spatial down-sampling operators along the wide and high modes, respectively, the target image
Figure FDA0003021340050000022
Spatially degraded imagery
Figure FDA0003021340050000023
Representing the degradation of the spatial resolution of the fused image and the observation image;
step 3.3, assuming degraded image
Figure FDA0003021340050000024
Under the condition of respectively acting on the wide mode and the high mode, combining the formula (1) and the formula (2) to form the image
Figure FDA0003021340050000025
Write as:
Figure FDA0003021340050000026
in the above formula, the first and second carbon atoms are,
Figure FDA0003021340050000027
down-sampling dictionaries with wide and high modes respectively, and multiplier notation with numbers in the backnA modulo-n product representing the tensor and matrix;
step 3.4, inputting Hsa-LtSe images with high spatial resolution, low temporal resolution and low spectral resolution
Figure FDA0003021340050000028
As a target image
Figure FDA0003021340050000029
Degraded image in time and spectral dimensions, the degraded image
Figure FDA00030213400500000210
From the target image
Figure FDA00030213400500000211
Applying a down-sampling operator in the time and spectral dimensions to obtain:
Figure FDA00030213400500000212
in the above formula, the multiplier with numbers in the back is extractednRepresenting the modulo-n product of the tensor and matrix, P3∈Rs×S,P4∈Rt×TRespectively representing the down-sampling operators on the spectral dimension and the time dimension, representing the degradation of the fused image and the observed image on the time-spectral dimension, combining the formula (1) and the formula (4), and converting the HR-MS image into the HR-MS image
Figure FDA00030213400500000213
Write as:
Figure FDA00030213400500000214
in the above formula, the multiplier with numbers in the back is extractednRepresenting the modulo-n product of the tensor and matrix,
Figure FDA00030213400500000215
representing a down-sampled dictionary in the time-spectral dimension.
3. The space-time spectrum integration fusion method for the multi-temporal remote sensing image based on the coupled sparse tensor decomposition as recited in claim 1, wherein the step 4 specifically comprises the following steps:
Figure FDA00030213400500000216
in the above formula, | · the luminance | |0And | · | non-conducting phosphor0And | · | non-conducting phosphorFRespectively represent l0Norm and F norm, N is structure tensor
Figure FDA00030213400500000217
The maximum number of non-zero elements in (a);
Figure FDA00030213400500000218
is represented by having a number nhA high modulus dictionary of atoms;
Figure FDA00030213400500000219
is represented by having nwA wide-mode dictionary of atoms;
Figure FDA0003021340050000031
is represented by having a number nstAn atomic time-spectral mode dictionary;
Figure FDA0003021340050000032
is a target image
Figure FDA0003021340050000033
Spatially degraded images;
Figure FDA0003021340050000034
is a target image
Figure FDA0003021340050000035
The structure tensor of (a); multiplier circuit with numerals in backnRepresenting the modulo-n product of the tensor and matrix,
Figure FDA0003021340050000036
down-sampling dictionaries of wide mode and high mode respectively;
Figure FDA0003021340050000037
a down-sampling dictionary representing a time-spectrum dimension;
Figure FDA0003021340050000038
in the above formula, the multiplier with numbers in the back is extractednRepresenting the modulo-n product of the tensor and matrix,
Figure FDA0003021340050000039
is represented by having a number nhA high modulus dictionary of atoms;
Figure FDA00030213400500000310
is represented by having nwA wide-mode dictionary of atoms;
Figure FDA00030213400500000311
is represented by having a number nstAn atomic time-spectral mode dictionary;
Figure FDA00030213400500000312
is a target image
Figure FDA00030213400500000313
Spatially degraded images;
Figure FDA00030213400500000314
is a target image
Figure FDA00030213400500000315
The structure tensor of (a);
Figure FDA00030213400500000316
down-sampling dictionaries of wide mode and high mode respectively;
Figure FDA00030213400500000317
a down-sampling dictionary representing a time-spectrum dimension; λ is sparse regularization parameter, | ·| non-woven phosphor1Is 11And (4) norm.
4. The space-time spectrum integration fusion method for the multi-temporal remote sensing image based on the coupled sparse tensor decomposition as recited in claim 2, wherein the step 5 specifically comprises the following steps:
step 5.1, initializing wide mode and high mode of the image based on a dictionary updating cycle-singular value decomposition algorithm, initializing time-spectrum mode based on a simplest simplex segmentation and augmentation Lagrange algorithm, and initializing structure tensor based on an alternate direction multiplier algorithm
Figure FDA00030213400500000318
Initialization is performed by solving using a near-end iterative optimization (PAO) algorithm
Figure FDA00030213400500000319
W, H, ST, guarantee
Figure FDA00030213400500000320
W, H and ST converge and find out key points; the ratio of W, H, ST,
Figure FDA00030213400500000321
the iterative update of (c) is as follows:
Figure FDA00030213400500000322
in the above formula, the first and second carbon atoms are,
Figure FDA00030213400500000323
given by the objective function (7), β is nearEnd weight coefficient, beta>0; the parameter with pre subscript represents the value solved in the last iteration;
Figure FDA00030213400500000324
is represented by having a number nhA high modulus dictionary of atoms;
Figure FDA00030213400500000325
is represented by having nwA wide-mode dictionary of atoms;
Figure FDA00030213400500000326
is represented by having a number nstAn atomic time-spectral mode dictionary;
Figure FDA00030213400500000327
is a target image
Figure FDA0003021340050000041
The structure tensor of (a); i | · | purple windFRepresents the F norm;
and 5.2, optimizing the dictionary and the structure tensor: updating and optimizing dictionaries W, H and ST based on a conjugate gradient algorithm, and carrying out structure tensor based on an alternative direction multiplier algorithm
Figure FDA0003021340050000042
Carrying out updating optimization solution; for the solved optimal W, H, ST dictionary and structure tensor
Figure FDA0003021340050000043
Estimating and obtaining the target image by using the formula (1)
Figure FDA0003021340050000044
5. The space-time spectrum integration fusion method for the multi-temporal remote sensing image based on the coupled sparse tensor decomposition is characterized in that the preprocessing means in the step 1 comprises the following steps: radiometric calibration, atmospheric correction, ortho correction, and image registration.
6. The space-time spectrum integration fusion method for the multi-temporal remote sensing image based on the coupled sparse tensor decomposition is characterized in that: in step 2, the data of the Hsa-LtSe image missing phase is filled with 0.
CN202110403626.7A 2021-04-15 2021-04-15 Multi-temporal remote sensing image space-time spectrum fusion method based on coupling sparse tensor decomposition Active CN113112591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110403626.7A CN113112591B (en) 2021-04-15 2021-04-15 Multi-temporal remote sensing image space-time spectrum fusion method based on coupling sparse tensor decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110403626.7A CN113112591B (en) 2021-04-15 2021-04-15 Multi-temporal remote sensing image space-time spectrum fusion method based on coupling sparse tensor decomposition

Publications (2)

Publication Number Publication Date
CN113112591A true CN113112591A (en) 2021-07-13
CN113112591B CN113112591B (en) 2022-08-26

Family

ID=76717048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110403626.7A Active CN113112591B (en) 2021-04-15 2021-04-15 Multi-temporal remote sensing image space-time spectrum fusion method based on coupling sparse tensor decomposition

Country Status (1)

Country Link
CN (1) CN113112591B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989406A (en) * 2021-12-28 2022-01-28 成都理工大学 Tomography gamma scanning image reconstruction method based on sparse tensor dictionary learning
CN115346004A (en) * 2022-10-18 2022-11-15 深圳市规划和自然资源数据管理中心(深圳市空间地理信息中心) Remote sensing time sequence data reconstruction method combining space-time reconstruction and CUDA acceleration

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140064554A1 (en) * 2011-11-14 2014-03-06 San Diego State University Research Foundation Image station matching, preprocessing, spatial registration and change detection with multi-temporal remotely-sensed imagery
US20140301659A1 (en) * 2013-04-07 2014-10-09 Bo Li Panchromatic Sharpening Method of Spectral Image Based on Fusion of Overall Structural Information and Spatial Detail Information
CN106651820A (en) * 2016-09-23 2017-05-10 西安电子科技大学 Sparse tensor neighborhood embedding-based remote sensing image fusion method
CN106780345A (en) * 2017-01-18 2017-05-31 西北工业大学 Based on the hyperspectral image super-resolution reconstruction method that coupling dictionary and space conversion are estimated
CN107977951A (en) * 2017-12-25 2018-05-01 咸阳师范学院 The multispectral and hyperspectral image fusion method decomposed based on Coupling Tensor
CN111340080A (en) * 2020-02-19 2020-06-26 济南大学 High-resolution remote sensing image fusion method and system based on complementary convolution characteristics

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140064554A1 (en) * 2011-11-14 2014-03-06 San Diego State University Research Foundation Image station matching, preprocessing, spatial registration and change detection with multi-temporal remotely-sensed imagery
US20140301659A1 (en) * 2013-04-07 2014-10-09 Bo Li Panchromatic Sharpening Method of Spectral Image Based on Fusion of Overall Structural Information and Spatial Detail Information
CN106651820A (en) * 2016-09-23 2017-05-10 西安电子科技大学 Sparse tensor neighborhood embedding-based remote sensing image fusion method
CN106780345A (en) * 2017-01-18 2017-05-31 西北工业大学 Based on the hyperspectral image super-resolution reconstruction method that coupling dictionary and space conversion are estimated
CN107977951A (en) * 2017-12-25 2018-05-01 咸阳师范学院 The multispectral and hyperspectral image fusion method decomposed based on Coupling Tensor
CN111340080A (en) * 2020-02-19 2020-06-26 济南大学 High-resolution remote sensing image fusion method and system based on complementary convolution characteristics

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUANFENG SHEN 等: "An Integrated Framework for the Spatio–Temporal–Spectral Fusion of Remote Sensing Images", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING ( VOLUME: 54, ISSUE: 12, DEC. 2016)》 *
孟祥超 等: "基于多分辨率分析的GF-5和GF-1遥感影像空—谱融合", 《遥感学报》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989406A (en) * 2021-12-28 2022-01-28 成都理工大学 Tomography gamma scanning image reconstruction method based on sparse tensor dictionary learning
CN113989406B (en) * 2021-12-28 2022-04-01 成都理工大学 Tomography gamma scanning image reconstruction method based on sparse tensor dictionary learning
CN115346004A (en) * 2022-10-18 2022-11-15 深圳市规划和自然资源数据管理中心(深圳市空间地理信息中心) Remote sensing time sequence data reconstruction method combining space-time reconstruction and CUDA acceleration

Also Published As

Publication number Publication date
CN113112591B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
Cheng et al. Sparse representation based pansharpening using trained dictionary
CN114119444B (en) Multi-source remote sensing image fusion method based on deep neural network
CN113112591B (en) Multi-temporal remote sensing image space-time spectrum fusion method based on coupling sparse tensor decomposition
CN108520495B (en) Hyperspectral image super-resolution reconstruction method based on clustering manifold prior
Sdraka et al. Deep learning for downscaling remote sensing images: Fusion and super-resolution
CN114998167B (en) High-spectrum and multi-spectrum image fusion method based on space-spectrum combined low rank
Liu et al. A practical pan-sharpening method with wavelet transform and sparse representation
Aghamaleki et al. Image fusion using dual tree discrete wavelet transform and weights optimization
CN117252761A (en) Cross-sensor remote sensing image super-resolution enhancement method
CN115496662A (en) High-order tensor spectral image super-resolution reconstruction method based on spectral information fusion
Lu et al. Hyper-sharpening based on spectral modulation
Wen et al. A novel spatial fidelity with learnable nonlinear mapping for panchromatic sharpening
Mei et al. Lightweight multiresolution feature fusion network for spectral super-resolution
Xu et al. Degradation-aware dynamic fourier-based network for spectral compressive imaging
Shi et al. A pansharpening method based on hybrid-scale estimation of injection gains
CN109785281B (en) Spectrum mapping based gray level amplitude modulation panning method
CN114140359B (en) Remote sensing image fusion sharpening method based on progressive cross-scale neural network
Divekar et al. Image fusion by compressive sensing
Cui et al. Meta-TR: meta-attention spatial compressive imaging network with swin transformer
Zhang et al. Considering nonoverlapped bands construction: A general dictionary learning framework for hyperspectral and multispectral image fusion
Yuan et al. Hyperspectral and multispectral image fusion using non-convex relaxation low rank and total variation regularization
Rostami et al. Hyperspectral image super-resolution via learning an undercomplete dictionary and intra-algorithmic postprocessing
Zheng et al. An unsupervised hyperspectral image fusion method based on spectral unmixing and deep learning
Xie et al. Two-stage fusion based CNN for hyperspectral pansharpening
Lei et al. An interpretable deep neural network for panchromatic and multispectral image fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Sun Weiwei

Inventor after: Zhou Jun

Inventor after: Meng Xiangchao

Inventor after: Yang Gang

Inventor before: Sun Weiwei

Inventor before: Zhou Jun

Inventor before: Meng Xiangchao

Inventor before: Yang Gang

GR01 Patent grant
GR01 Patent grant