CN115272084B - High-resolution image reconstruction method and device

Info

Publication number: CN115272084B
Application number: CN202211179917.3A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN115272084A
Prior art keywords: image, time phase, resolution
Legal status: Active (granted)
Inventors: 李家, 周钰谦, 张秋燕
Assignee: Chengdu University of Information Technology
Application filed by Chengdu University of Information Technology


Classifications

    • G06T 3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G06T 3/4046 — Scaling of whole images or parts thereof using neural networks
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06V 10/806 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
    • G06V 10/82 — Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06T 2207/20081 — Special algorithmic details: Training; Learning
    • G06T 2207/20084 — Special algorithmic details: Artificial neural networks [ANN]
    • G06T 2207/20221 — Image combination: Image fusion; Image merging

Abstract

The application provides a high-resolution image reconstruction method and device. The method comprises the following steps: acquiring a first time phase low-resolution image, a second time phase low-resolution image and a third time phase low-resolution image, where the first, second and third time phases are ordered in time sequence and the first and third time phase low-resolution images each have a paired high-resolution image of the same region and time phase; obtaining transition residual images through a super-resolution convolutional neural network; obtaining sensor biases through a bias feature extraction convolutional neural network; and obtaining a second time phase high-resolution reconstructed image through a time fusion convolutional neural network. Based on sensor error correction and spatio-temporal data fusion, the method effectively improves the visual quality of high-resolution image reconstruction. A high-resolution image reconstruction apparatus, an electronic device and a storage medium are also provided.

Description

High-resolution image reconstruction method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular to a high-resolution image reconstruction method and apparatus based on sensor error correction and spatio-temporal data fusion, an electronic device, and a storage medium.
Background
Remote sensing image time series have important applications in agricultural remote sensing monitoring, rural informatization and related fields. When monitoring or processing the same agricultural area, remote sensing satellite images that pass over the area at different times are usually analyzed together. However, there is an inherent trade-off between the temporal resolution and the spatial resolution of common remote sensing data. For example, Landsat data have high spatial resolution but relatively low temporal resolution, and are easily affected by cloudy and rainy weather, so effective data may be unavailable during critical periods of crop monitoring. MODIS data, by contrast, have high temporal resolution but relatively low spatial resolution, which causes a mixed-pixel problem in crop classification; MODIS data are therefore unsuitable for areas with complex planting structures, fragmented landscapes and strong heterogeneity. The prior art addresses this by fusing spatio-temporal data from the satellite images of two sensors: sensor one with very high temporal resolution but coarse spatial resolution, and sensor two with very high spatial resolution but lower temporal resolution. The fused output is a composite image sequence with the temporal resolution of sensor one and the spatial resolution of sensor two.
However, the existing spatio-temporal fusion methods have the following problems: (1) they typically reconstruct high-resolution images under the assumption that image changes can be transferred directly from one sensor to another; this ignores differences in the ability of different sensors to characterize change, which leads to spectral and spatial distortions in the reconstructed image; (2) they generally fuse temporal features directly by linear weighting, which does not fully account for the change characteristics of every pixel, so the representation capability of the image features is limited.
Therefore, a new high resolution image reconstruction method is needed to solve the above problems.
Disclosure of Invention
The application provides a high-resolution image reconstruction method and device, an electronic device and a storage medium that, based on sensor error correction and spatio-temporal data fusion, can effectively improve the visual quality of high-resolution image reconstruction.
According to a first aspect, the present invention provides a high-resolution image reconstruction method, comprising the following steps:

acquiring a first time phase low-resolution image, a second time phase low-resolution image and a third time phase low-resolution image, wherein the first, second and third time phases are ordered in time sequence, the first time phase low-resolution image has a paired first time phase high-resolution image of the same region and time phase, and the third time phase low-resolution image has a paired third time phase high-resolution image of the same region and time phase;

inputting the first and second time phase low-resolution images into a super-resolution convolutional neural network to obtain a first-second time phase transition residual image, and inputting the second and third time phase low-resolution images into the super-resolution convolutional neural network to obtain a second-third time phase transition residual image;

inputting the first-second time phase transition residual image into a bias feature extraction convolutional neural network to obtain a first-second time phase sensor bias, and inputting the second-third time phase transition residual image into the bias feature extraction convolutional neural network to obtain a second-third time phase sensor bias;

adding the first time phase high-resolution image, the first-second time phase transition residual image and the first-second time phase sensor bias to obtain a second time phase forward transition high-resolution image, and adding the third time phase high-resolution image, the second-third time phase transition residual image and the second-third time phase sensor bias to obtain a second time phase backward transition high-resolution image;

and inputting the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image into a time fusion convolutional neural network to obtain a second time phase high-resolution reconstructed image.
Optionally, the inputting of the first and second time phase low-resolution images into the super-resolution convolutional neural network to obtain the first-second time phase transition residual image, and of the second and third time phase low-resolution images into the super-resolution convolutional neural network to obtain the second-third time phase transition residual image, specifically includes:

training the super-resolution convolutional neural network with the loss function

$L_1(\Theta_1) = \frac{1}{wh}\,\| F_1(\Delta L_{ij}; \Theta_1) - \Delta H_{ij} \|^2$

where $F_1$ denotes the mapping function of the super-resolution convolutional neural network, $\Theta_1$ denotes the training weight parameters of the mapping function, $\Delta H_{ij} = H_j - H_i$ is the difference between the $i$-th time phase high-resolution image $H_i$ and the $j$-th time phase high-resolution image $H_j$ in the training set, $\Delta L_{ij} = L_j - L_i$ is the difference between the $i$-th time phase low-resolution image $L_i$ and the $j$-th time phase low-resolution image $L_j$ in the training set, $\|\cdot\|$ denotes the Euclidean norm, $w$ is the resolution of the images along the image length, and $h$ is the resolution of the images along the image width; the loss is minimized over the time phase pairs $(i, j)$ of the training set;

calculating the image difference between the first time phase low-resolution image and the second time phase low-resolution image, and inputting the image difference into the trained super-resolution convolutional neural network to obtain the first-second time phase transition residual image;

and calculating the image difference between the second time phase low-resolution image and the third time phase low-resolution image, and inputting the image difference into the trained super-resolution convolutional neural network to obtain the second-third time phase transition residual image.
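By way of illustration, the following is a minimal PyTorch sketch of this training step; the SRCNN-style layer sizes, the bicubic pre-upsampling of the low-resolution difference and the `scale` factor are assumptions not fixed by this description, and all identifiers are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionResidualNet(nn.Module):
    """Super-resolution CNN F1: maps a low-resolution image difference
    to a high-resolution transition residual image (3-layer SRCNN style)."""
    def __init__(self, channels: int = 1, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 9, padding=4), nn.ReLU(),  # feature extraction
            nn.Conv2d(64, 32, 1), nn.ReLU(),                   # nonlinear mapping
            nn.Conv2d(32, channels, 5, padding=2),             # reconstruction
        )

    def forward(self, delta_l: torch.Tensor) -> torch.Tensor:
        # Bicubic pre-upsampling so the output matches the high-resolution grid
        # (an assumption; the patent only requires a high-resolution output).
        x = F.interpolate(delta_l, scale_factor=self.scale, mode="bicubic")
        return self.body(x)

def train_step(net, l_i, l_j, h_i, h_j, optimizer):
    """One training step with the loss ||F1(dL; Theta1) - dH||^2 / (w*h)."""
    delta_l, delta_h = l_j - l_i, h_j - h_i
    pred = net(delta_l)
    loss = ((pred - delta_h) ** 2).sum() / (delta_h.shape[-1] * delta_h.shape[-2])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```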
Optionally, the inputting of the first-second time phase transition residual image into the bias feature extraction convolutional neural network to obtain the first-second time phase sensor bias, and of the second-third time phase transition residual image into the bias feature extraction convolutional neural network to obtain the second-third time phase sensor bias, specifically includes:

training the bias feature extraction convolutional neural network with the loss function

$L_2(\Theta_2) = \| F_2(\Delta\hat{H}_{ij}; \Theta_2) - (\Delta H_{ij} - \Delta\hat{H}_{ij}) \|^2$

where $F_2$ denotes the mapping function of the bias feature extraction convolutional neural network, $\Theta_2$ denotes the training weight parameters of the mapping function, $\Delta H_{ij}$ is the difference between the $i$-th time phase high-resolution image $H_i$ and the $j$-th time phase high-resolution image $H_j$ in the training set, and $\Delta\hat{H}_{ij}$ is the $i$-$j$ time phase transition residual image, so that the network learns the sensor bias $\Delta H_{ij} - \Delta\hat{H}_{ij}$ not captured by the transition residual;

inputting the first-second time phase transition residual image into the trained bias feature extraction convolutional neural network to obtain the first-second time phase sensor bias;

and inputting the second-third time phase transition residual image into the trained bias feature extraction convolutional neural network to obtain the second-third time phase sensor bias.
Optionally, the bias feature extraction convolutional neural network includes an input layer, three convolutional hidden layers and an output layer, where the three convolutional hidden layers correspond to a feature extraction operation, a nonlinear mapping operation and a reconstruction operation, respectively.
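A minimal sketch of this bias network under the loss above; the layer widths and kernel sizes are assumptions, and the training target `delta_h - delta_h_hat` is inferred from the later step in which the transition image equals high-resolution image plus transition residual plus sensor bias:

```python
import torch
import torch.nn as nn

# Bias feature extraction CNN F2: input layer, three convolutional hidden
# layers (feature extraction, nonlinear mapping, reconstruction), output layer.
# Layer widths and kernel sizes are illustrative assumptions.
bias_net = nn.Sequential(
    nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),  # feature extraction
    nn.Conv2d(64, 32, 1), nn.ReLU(),            # nonlinear mapping
    nn.Conv2d(32, 1, 5, padding=2),             # reconstruction
)

def bias_loss(delta_h_hat: torch.Tensor, delta_h: torch.Tensor) -> torch.Tensor:
    """L2(Theta2): the network learns the part of the true high-resolution
    change that the transition residual failed to capture (the sensor bias)."""
    pred_bias = bias_net(delta_h_hat)
    return ((pred_bias - (delta_h - delta_h_hat)) ** 2).mean()
```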
Optionally, the inputting of the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image into the time fusion convolutional neural network to obtain the second time phase high-resolution reconstructed image specifically includes:

upsampling the first time phase low-resolution image, the second time phase low-resolution image and the third time phase low-resolution image by bicubic interpolation to obtain a first time phase upsampled image, a second time phase upsampled image and a third time phase upsampled image, where the upsampling ratio is the same as the sampling ratio of the high-resolution reconstructed image;

subtracting the first time phase upsampled image from the second time phase upsampled image to obtain a first-second time phase upsampled residual image, and subtracting the second time phase upsampled image from the third time phase upsampled image to obtain a second-third time phase upsampled residual image;

training the time fusion convolutional neural network with the loss function

$L_3(\Theta_3) = \| F_3(H_j^{\rightarrow}, H_j^{\leftarrow}, R_{ij}, R_{jk}; \Theta_3) - H_j \|^2 + \bigl(1 - \mathrm{SSIM}\bigl(F_3(H_j^{\rightarrow}, H_j^{\leftarrow}, R_{ij}, R_{jk}; \Theta_3), H_j\bigr)\bigr)$

where $F_3$ denotes the mapping function of the time fusion convolutional neural network, $\Theta_3$ denotes the training weight parameters of the mapping function, $j$ denotes the $j$-th time phase in the training set, $H_j$ is the $j$-th time phase high-resolution image, $H_j^{\rightarrow}$ is the forward transition high-resolution image at the $j$-th time phase obtained from the $i$-th time phase, $H_j^{\leftarrow}$ is the backward transition high-resolution image at the $j$-th time phase obtained from the $k$-th time phase, $R_{ij}$ is the $i$-$j$ time phase upsampled residual image, $R_{jk}$ is the $j$-$k$ time phase upsampled residual image, $\tilde{L}_i$, $\tilde{L}_j$ and $\tilde{L}_k$ are the upsampled images obtained by upsampling the $i$-th, $j$-th and $k$-th time phase low-resolution images respectively, and $\mathrm{SSIM}$ is the image structural similarity function

$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$

in which $\mu_x$ and $\mu_y$ respectively denote the means of all elements of images $x$ and $y$, $\sigma_x$ and $\sigma_y$ respectively denote the standard deviations of all elements of images $x$ and $y$, $\sigma_{xy}$ denotes the covariance of the elements of images $x$ and $y$, and $c_1$ and $c_2$ are two very small constants that prevent the denominator from being 0;

and inputting the second time phase forward transition high-resolution image, the second time phase backward transition high-resolution image, the first-second time phase upsampled residual image and the second-third time phase upsampled residual image into the trained time fusion convolutional neural network to obtain the second time phase high-resolution reconstructed image.
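A minimal sketch of this combined loss, computing SSIM from the global means, standard deviations and covariance of all elements as defined above; the constant values `c1`, `c2` and the unweighted sum of the two terms are assumptions:

```python
import torch

def global_ssim(x: torch.Tensor, y: torch.Tensor,
                c1: float = 1e-4, c2: float = 9e-4) -> torch.Tensor:
    """SSIM over whole images, from the means, standard deviations and
    covariance of all elements; c1, c2 guard the denominator against 0."""
    mu_x, mu_y = x.mean(), y.mean()
    sigma_x, sigma_y = x.std(), y.std()
    sigma_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x ** 2 + sigma_y ** 2 + c2))

def fusion_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L3: least-squares term plus a structural term; SSIM close to 1 means
    the two image structures are highly similar, so (1 - SSIM) is minimized."""
    mse = ((pred - target) ** 2).mean()
    return mse + (1.0 - global_ssim(pred, target))
```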
Optionally, the time fusion convolutional neural network includes an input layer, three convolutional hidden layers and an output layer;

the input layer stacks the first-second time phase upsampled residual image, the second-third time phase upsampled residual image, the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image together, so that four channels of data enter the model;

the three hidden layers correspond to a feature extraction operation, a nonlinear mapping operation and a weight extraction operation respectively, and the hidden layer corresponding to weight extraction outputs an $h \times w$ tensor $W$, where $h$ and $w$ are the length and width resolution of the high-resolution image;

the output formula of the output layer is

$\hat{H}_2 = W \odot H_2^{\rightarrow} + (\mathbf{1} - W) \odot H_2^{\leftarrow}$

where $\hat{H}_2$ is the second time phase high-resolution reconstructed image, $H_2^{\rightarrow}$ is the second time phase forward transition high-resolution image, $H_2^{\leftarrow}$ is the second time phase backward transition high-resolution image, $\mathbf{1}$ denotes the $h \times w$ tensor whose elements are all 1, $W$ is the $h \times w$ tensor output by the previous layer, and $\odot$ denotes multiplication of elements at corresponding positions.
According to a second aspect, the present invention provides a high-resolution image reconstruction apparatus, comprising:

an image acquisition module for acquiring a first time phase low-resolution image, a second time phase low-resolution image and a third time phase low-resolution image, wherein the first, second and third time phases are ordered in time sequence, the first time phase low-resolution image has a paired first time phase high-resolution image of the same region and time phase, and the third time phase low-resolution image has a paired third time phase high-resolution image of the same region and time phase;

a transition residual image data processing module for inputting the first and second time phase low-resolution images into a super-resolution convolutional neural network to obtain a first-second time phase transition residual image, and inputting the second and third time phase low-resolution images into the super-resolution convolutional neural network to obtain a second-third time phase transition residual image;

a sensor bias data processing module for inputting the first-second time phase transition residual image into a bias feature extraction convolutional neural network to obtain a first-second time phase sensor bias, and inputting the second-third time phase transition residual image into the bias feature extraction convolutional neural network to obtain a second-third time phase sensor bias;

a transition high-resolution image data processing module for adding the first time phase high-resolution image, the first-second time phase transition residual image and the first-second time phase sensor bias to obtain a second time phase forward transition high-resolution image, and adding the third time phase high-resolution image, the second-third time phase transition residual image and the second-third time phase sensor bias to obtain a second time phase backward transition high-resolution image;

and a high-resolution reconstructed image data processing module for inputting the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image into a time fusion convolutional neural network to obtain a second time phase high-resolution reconstructed image.
Optionally, the high-resolution reconstructed image data processing module includes:

an upsampled image data processing submodule for upsampling the first, second and third time phase low-resolution images by bicubic interpolation to obtain a first, a second and a third time phase upsampled image, the upsampling ratio being the same as the sampling ratio of the high-resolution reconstructed image;

an upsampled residual image data processing submodule for subtracting the first time phase upsampled image from the second time phase upsampled image to obtain a first-second time phase upsampled residual image, and subtracting the second time phase upsampled image from the third time phase upsampled image to obtain a second-third time phase upsampled residual image;

a time fusion convolutional neural network training submodule for training the time fusion convolutional neural network with the loss function

$L_3(\Theta_3) = \| F_3(H_j^{\rightarrow}, H_j^{\leftarrow}, R_{ij}, R_{jk}; \Theta_3) - H_j \|^2 + \bigl(1 - \mathrm{SSIM}\bigl(F_3(H_j^{\rightarrow}, H_j^{\leftarrow}, R_{ij}, R_{jk}; \Theta_3), H_j\bigr)\bigr)$

where $F_3$ denotes the mapping function of the time fusion convolutional neural network, $\Theta_3$ denotes the training weight parameters of the mapping function, $j$ denotes the $j$-th time phase in the training set, $H_j$ is the $j$-th time phase high-resolution image, $H_j^{\rightarrow}$ is the forward transition high-resolution image at the $j$-th time phase obtained from the $i$-th time phase, $H_j^{\leftarrow}$ is the backward transition high-resolution image at the $j$-th time phase obtained from the $k$-th time phase, $R_{ij}$ and $R_{jk}$ are the $i$-$j$ and $j$-$k$ time phase upsampled residual images, $\tilde{L}_i$, $\tilde{L}_j$ and $\tilde{L}_k$ are the upsampled images obtained by upsampling the $i$-th, $j$-th and $k$-th time phase low-resolution images respectively, and $\mathrm{SSIM}$ is the image structural similarity function

$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$

in which $\mu_x$ and $\mu_y$ respectively denote the means of all elements of images $x$ and $y$, $\sigma_x$ and $\sigma_y$ respectively denote the standard deviations of all elements of images $x$ and $y$, $\sigma_{xy}$ denotes the covariance of the elements of images $x$ and $y$, and $c_1$ and $c_2$ are two very small constants that prevent the denominator from being 0;

and a high-resolution reconstructed image data output submodule for inputting the second time phase forward transition high-resolution image, the second time phase backward transition high-resolution image, the first-second time phase upsampled residual image and the second-third time phase upsampled residual image into the trained time fusion convolutional neural network to obtain the second time phase high-resolution reconstructed image.
According to a third aspect, the invention provides an electronic device comprising: a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement a high resolution image reconstruction method according to the first aspect.
According to a fourth aspect, the present invention provides a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the method for high resolution reconstruction of images according to the first aspect.
The invention has the beneficial effects that:
the invention fully considers the difference of different sensors, learns the error of the two sensors by training the bias characteristic convolution neural network, and improves the high-resolution reconstruction precision of the image. The invention also fully considers that the image to be reconstructed is a time sequence data, and the influence of different time length changes on the reconstructed date image is different, on one hand, the influence degree of the image difference caused by the time change at each pixel point on the reconstructed date image is learned through training the time fusion convolution neural network, and the influence degree is taken as the time fusion weight of the pixel point, so that the weight expression force is stronger and more accurate than that of the traditional method which directly uses the residual image value and uses a linear function to calculate the weight; on the other hand, an SSIM structure similarity function is added in the model loss function besides the classical MSE loss function, the closer the SSIM value is to 1, the higher the similarity of the two image structures is, the less the least square loss of the optimized model is ensured, the better structure similarity with the real picture is kept, and the better visual effect is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Wherein:
FIG. 1 is a flow chart of a method for high resolution image reconstruction according to an embodiment;
FIG. 2 is a block diagram of a logical framework of a high resolution image reconstruction method according to an embodiment;
FIG. 3 is a flow diagram of obtaining a high resolution reconstructed image through a time-fused convolutional neural network in one embodiment;
FIG. 4 is a diagram illustrating an actual predicted effect of a high resolution image reconstruction method according to an embodiment;
fig. 5 is a diagram illustrating an architecture of an image high resolution reconstruction device according to another embodiment.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is noted that the terms "comprises," "comprising," and "having" and any variations thereof in the description and claims of this application and the drawings described above are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. In the claims, the description and the drawings of the specification of the present application, relational terms such as "first" and "second", and the like, may be used solely to distinguish one entity/action/object from another entity/action/object without necessarily requiring or implying any actual such relationship or order between such entities/actions/objects.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
Fig. 1 is a flowchart of a high resolution image reconstruction method according to an embodiment of the present disclosure. Fig. 2 is a schematic diagram of a logical framework of a high resolution image reconstruction method according to an embodiment of the present disclosure.
Referring to fig. 1 and fig. 2, in an embodiment, a method for reconstructing an image with high resolution is provided, including:
step 100, obtaining a first time phase low resolution image
Figure 374113DEST_PATH_IMAGE068
Second time-phase low-resolution image
Figure 127306DEST_PATH_IMAGE069
And a third time-phase low-resolution image
Figure 149619DEST_PATH_IMAGE070
The first time phase, the second time phase and the third time phase are arranged in sequence according to a time sequence, wherein the first time phase is a low-resolution image
Figure 787405DEST_PATH_IMAGE068
First time phase high resolution image with same time phase in same paired region
Figure 844354DEST_PATH_IMAGE071
Third time-phase low-resolution image
Figure 437622DEST_PATH_IMAGE070
Third time-phase high-resolution image with same time phase of same region and matched pair
Figure 212811DEST_PATH_IMAGE072
It should be noted that, as an example, in the embodiment of the present invention, a remote sensing image of two sensors for a specific planting structure area is obtained first, where an image of a sensor i is a high resolution image, and an image sequence is
Figure 123129DEST_PATH_IMAGE073
The image of the second sensor is the low resolution image sequence matched with the second sensor
Figure 628060DEST_PATH_IMAGE074
The paired high-low resolution image sequences form a training sample set, and the training sample set is used for training the neural network designed by the invention.
In addition to the low resolution images with the paired high resolution images, there are still a large number of high resolution images without the paired high resolution images in the low resolution image sequence, and the method of the present invention will reconstruct the paired high resolution images for them. For clarity and brief description of the embodiments of the present invention, in the following embodiments, the reconstruction portion is only three low resolution image sequences with different phases
Figure 589632DEST_PATH_IMAGE075
And two high-resolution image sequences paired
Figure 180013DEST_PATH_IMAGE076
To reconstruct and
Figure 894023DEST_PATH_IMAGE077
paired high spatial resolution imagery
Figure 722301DEST_PATH_IMAGE078
The method for reconstructing the high resolution image based on the sensor error correction and the spatio-temporal data fusion is described by way of example. By analogy, the unpaired high spatial resolution image in the low resolution image sequence can be reconstructed, so that high time and high spatial resolution images can be obtainedA sequence of spatially resolved images. In order to understand the consistency of the meanings of various physical quantities, the same parametric representation is used for the training set parametric representation in the training phase of the neural network in each step and for the parametric representation in the steps of prediction, recognition and reconstruction based on the neural network. This is a brief way of understanding for a person skilled in the art.
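As an illustration of how such a training set could be assembled, the sketch below pairs the two sequences by acquisition date and enumerates time-ordered triples; the date-keyed dictionaries and the exhaustive triple enumeration are assumptions, not part of the patent:

```python
from itertools import combinations

def build_training_triples(low_res: dict, high_res: dict):
    """low_res / high_res map acquisition dates to images of one region.
    Dates present in both sequences yield paired (L, H) samples; any three
    paired dates in time order form one (i, j, k) training triple."""
    paired_dates = sorted(set(low_res) & set(high_res))
    triples = []
    for i, j, k in combinations(paired_dates, 3):  # already time-ordered
        triples.append(((low_res[i], high_res[i]),
                        (low_res[j], high_res[j]),
                        (low_res[k], high_res[k])))
    return triples
```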
Step 200: input the first time phase low-resolution image $L_1$ and the second time phase low-resolution image $L_2$ into the super-resolution convolutional neural network to obtain the first-second time phase transition residual image $\Delta\hat{H}_{12}$, and input the second time phase low-resolution image $L_2$ and the third time phase low-resolution image $L_3$ into the super-resolution convolutional neural network to obtain the second-third time phase transition residual image $\Delta\hat{H}_{23}$.
In one embodiment, step 200 further includes:

Step 201: train the super-resolution convolutional neural network with the loss function

$L_1(\Theta_1) = \frac{1}{wh}\,\| F_1(\Delta L_{ij}; \Theta_1) - \Delta H_{ij} \|^2$

where $F_1$ denotes the mapping function of the super-resolution convolutional neural network, $\Theta_1$ denotes the training weight parameters of the mapping function, $\Delta H_{ij} = H_j - H_i$ is the difference between the $i$-th time phase high-resolution image $H_i$ and the $j$-th time phase high-resolution image $H_j$ in the training set, $\Delta L_{ij} = L_j - L_i$ is the difference between the $i$-th time phase low-resolution image $L_i$ and the $j$-th time phase low-resolution image $L_j$ in the training set, $\|\cdot\|$ denotes the Euclidean norm, $w$ is the resolution of the images along the image length, and $h$ is the resolution of the images along the image width.

Step 202: calculate the image difference $\Delta L_{12}$ between the first time phase low-resolution image $L_1$ and the second time phase low-resolution image $L_2$, and input it into the trained super-resolution convolutional neural network to obtain the first-second time phase transition residual image $\Delta\hat{H}_{12} = F_1(\Delta L_{12}; \hat{\Theta}_1)$, where $\hat{\Theta}_1$ denotes the weight parameters of the trained super-resolution convolutional neural network; calculate the image difference $\Delta L_{23}$ between the second time phase low-resolution image $L_2$ and the third time phase low-resolution image $L_3$, and input it into the trained super-resolution convolutional neural network to obtain the second-third time phase transition residual image $\Delta\hat{H}_{23} = F_1(\Delta L_{23}; \hat{\Theta}_1)$.
It should be noted that the Super-Resolution Convolutional Neural Network (SRCNN) adopted in this embodiment belongs to the prior art and is described in detail in Dong, C., et al.
Step 300: input the first-second time phase transition residual image $\Delta\hat{H}_{12}$ into the bias feature extraction convolutional neural network to obtain the first-second time phase sensor bias $b_{12}$, and input the second-third time phase transition residual image $\Delta\hat{H}_{23}$ into the bias feature extraction convolutional neural network to obtain the second-third time phase sensor bias $b_{23}$.
In one embodiment, step 300 further includes:

Step 301: train the bias feature extraction convolutional neural network with the loss function

$L_2(\Theta_2) = \| F_2(\Delta\hat{H}_{ij}; \Theta_2) - (\Delta H_{ij} - \Delta\hat{H}_{ij}) \|^2$

where $F_2$ denotes the mapping function of the bias feature extraction convolutional neural network, $\Theta_2$ denotes the training weight parameters of the mapping function, $\Delta H_{ij}$ is the difference between the $i$-th time phase high-resolution image $H_i$ and the $j$-th time phase high-resolution image $H_j$ in the training set, and $\Delta\hat{H}_{ij}$ is the $i$-$j$ time phase transition residual image.

Step 302: input the first-second time phase transition residual image $\Delta\hat{H}_{12}$ into the trained bias feature extraction convolutional neural network to obtain the first-second time phase sensor bias $b_{12} = F_2(\Delta\hat{H}_{12}; \hat{\Theta}_2)$, where $\hat{\Theta}_2$ denotes the weight parameters of the trained bias feature extraction convolutional neural network; input the second-third time phase transition residual image $\Delta\hat{H}_{23}$ into the trained bias feature extraction convolutional neural network to obtain the second-third time phase sensor bias $b_{23} = F_2(\Delta\hat{H}_{23}; \hat{\Theta}_2)$.

It should be noted that, in the training process of the bias feature extraction convolutional neural network, $\Delta\hat{H}_{ij}$ is calculated with the trained super-resolution convolutional neural network according to steps 200 and 202.
In one embodiment, the bias feature extraction convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer, the three convolutional hidden layers corresponding to a feature extraction operation, a nonlinear mapping operation and a reconstruction operation, respectively.
Step 400: add the first time phase high-resolution image $H_1$, the first-second time phase transition residual image $\Delta\hat{H}_{12}$ and the first-second time phase sensor bias $b_{12}$ to obtain the second time phase forward transition high-resolution image $H_2^{\rightarrow} = H_1 + \Delta\hat{H}_{12} + b_{12}$, and add the third time phase high-resolution image $H_3$, the second-third time phase transition residual image $\Delta\hat{H}_{23}$ and the second-third time phase sensor bias $b_{23}$ to obtain the second time phase backward transition high-resolution image $H_2^{\leftarrow} = H_3 + \Delta\hat{H}_{23} + b_{23}$.

Step 500: input the second time phase forward transition high-resolution image $H_2^{\rightarrow}$ and the second time phase backward transition high-resolution image $H_2^{\leftarrow}$ into the time fusion convolutional neural network to obtain the second time phase high-resolution reconstructed image $\hat{H}_2$.
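Putting steps 200 to 500 together, a minimal sketch of the resulting inference pipeline; `f1`, `f2` and `f3` stand for the three trained networks, `r12` and `r23` are the upsampled residual images of steps 501 and 502 below, the difference orders are assumed, and all names are illustrative:

```python
import torch

def reconstruct_h2(l1, l2, l3, h1, h3, r12, r23, f1, f2, f3):
    """Steps 200-500: reconstruct the second time phase high-resolution image
    from three low-resolution images and the two paired high-resolution images.
    f1, f2, f3 are the trained super-resolution, bias feature extraction and
    time fusion networks (illustrative callables)."""
    with torch.no_grad():
        delta_h12 = f1(l2 - l1)    # step 200: transition residual images
        delta_h23 = f1(l2 - l3)    # (difference order toward phase 2 is an assumption)
        b12, b23 = f2(delta_h12), f2(delta_h23)   # step 300: sensor biases
        h2_fwd = h1 + delta_h12 + b12             # step 400: forward transition
        h2_bwd = h3 + delta_h23 + b23             #           backward transition
        return f3(r12, r23, h2_fwd, h2_bwd)       # step 500: time fusion
```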
Fig. 3 is a flow diagram of obtaining the second time phase high-resolution reconstructed image through the time fusion convolutional neural network in an embodiment of the present disclosure.
Referring to fig. 3, in an embodiment, step 500 further includes:

Step 501: upsample the first time phase low-resolution image $L_1$, the second time phase low-resolution image $L_2$ and the third time phase low-resolution image $L_3$ by bicubic interpolation to obtain a first time phase upsampled image $\tilde{L}_1$, a second time phase upsampled image $\tilde{L}_2$ and a third time phase upsampled image $\tilde{L}_3$, where the upsampling ratio is the same as the sampling ratio of the high-resolution reconstructed image.

Step 502: subtract the first time phase upsampled image $\tilde{L}_1$ from the second time phase upsampled image $\tilde{L}_2$ to obtain the first-second time phase upsampled residual image $R_{12} = \tilde{L}_2 - \tilde{L}_1$, and subtract the second time phase upsampled image $\tilde{L}_2$ from the third time phase upsampled image $\tilde{L}_3$ to obtain the second-third time phase upsampled residual image $R_{23} = \tilde{L}_3 - \tilde{L}_2$.
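A minimal sketch of these two steps; the `scale` value is an assumed example of the reconstruction ratio:

```python
import torch.nn.functional as F

def upsampled_residuals(l1, l2, l3, scale=4):
    """Steps 501-502: bicubic upsampling at the high-resolution reconstruction
    ratio, then differencing adjacent time phases to get R12 and R23."""
    up = lambda x: F.interpolate(x, scale_factor=scale, mode="bicubic")
    u1, u2, u3 = up(l1), up(l2), up(l3)
    return u2 - u1, u3 - u2
```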
Step 503: train the time fusion convolutional neural network with the loss function

$L_3(\Theta_3) = \| F_3(H_j^{\rightarrow}, H_j^{\leftarrow}, R_{ij}, R_{jk}; \Theta_3) - H_j \|^2 + \bigl(1 - \mathrm{SSIM}\bigl(F_3(H_j^{\rightarrow}, H_j^{\leftarrow}, R_{ij}, R_{jk}; \Theta_3), H_j\bigr)\bigr)$

where $F_3$ denotes the mapping function of the time fusion convolutional neural network, $\Theta_3$ denotes the training weight parameters of the mapping function, $j$ denotes the $j$-th time phase in the training set, $H_j$ is the $j$-th time phase high-resolution image, $H_j^{\rightarrow}$ is the forward transition high-resolution image at the $j$-th time phase obtained from the $i$-th time phase, $H_j^{\leftarrow}$ is the backward transition high-resolution image at the $j$-th time phase obtained from the $k$-th time phase, $R_{ij}$ is the $i$-$j$ time phase upsampled residual image, $R_{jk}$ is the $j$-$k$ time phase upsampled residual image, $\tilde{L}_i$, $\tilde{L}_j$ and $\tilde{L}_k$ are the upsampled images obtained by upsampling the $i$-th, $j$-th and $k$-th time phase low-resolution images respectively, and $\mathrm{SSIM}$ is the image structural similarity function

$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$

in which $\mu_x$ and $\mu_y$ respectively denote the means of all elements of images $x$ and $y$, $\sigma_x$ and $\sigma_y$ respectively denote the standard deviations of all elements of images $x$ and $y$, $\sigma_{xy}$ denotes the covariance of the elements of images $x$ and $y$, and $c_1$ and $c_2$ are two very small constants that prevent the denominator from being 0.

Step 504: input the second time phase forward transition high-resolution image $H_2^{\rightarrow}$, the second time phase backward transition high-resolution image $H_2^{\leftarrow}$, the first-second time phase upsampled residual image $R_{12}$ and the second-third time phase upsampled residual image $R_{23}$ into the trained time fusion convolutional neural network to obtain the second time phase high-resolution reconstructed image $\hat{H}_2 = F_3(H_2^{\rightarrow}, H_2^{\leftarrow}, R_{12}, R_{23}; \hat{\Theta}_3)$, where $\hat{\Theta}_3$ denotes the weight parameters of the trained time fusion convolutional neural network.

It should be noted that, in the training process of the time fusion convolutional neural network, $H_j^{\rightarrow}$ and $H_j^{\leftarrow}$ are calculated according to step 400.
It should be noted that the above description of the high-resolution image reconstruction method uses only the first, second and third time phases; in the training of the three convolutional neural networks, however, any three images in the image set may be combined into a three-time-phase combination, with $i$, $j$ and $k$ corresponding to the first, second and third time phases respectively.
In one embodiment, the time fusion convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer;

the input layer stacks the first-second time phase upsampled residual image, the second-third time phase upsampled residual image, the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image together, so that four channels of data enter the model;

the three hidden layers correspond to a feature extraction operation, a nonlinear mapping operation and a weight extraction operation respectively, and the hidden layer corresponding to weight extraction outputs an $h \times w$ tensor $W$, where $h$ and $w$ are the length and width resolution of the high-resolution image;

the output formula of the output layer is

$\hat{H}_2 = W \odot H_2^{\rightarrow} + (\mathbf{1} - W) \odot H_2^{\leftarrow}$

where $\hat{H}_2$ is the second time phase high-resolution reconstructed image, $H_2^{\rightarrow}$ is the second time phase forward transition high-resolution image, $H_2^{\leftarrow}$ is the second time phase backward transition high-resolution image, $\mathbf{1}$ denotes the $h \times w$ tensor whose elements are all 1, $W$ is the $h \times w$ tensor output by the previous layer, and $\odot$ denotes multiplication of elements at corresponding positions.
Referring to FIG. 4, $\hat{H}_2$ is the high-resolution image, paired with $L_2$, reconstructed from the low-resolution image sequence $\{L_1, L_2, L_3\}$ and the high-resolution image sequence $\{H_1, H_3\}$ acquired from two different sensors; it both has the same resolution as the high-resolution image sequence and retains the change information of the low-resolution time-series images over time. By analogy, every low-resolution image without a paired high-spatial-resolution image in the sequence can be reconstructed, so that an image sequence with both high temporal and high spatial resolution is obtained.
Referring to fig. 5, in an embodiment, a high-resolution image reconstruction apparatus is provided, comprising:

an image acquisition module for acquiring a first time phase low-resolution image $L_1$, a second time phase low-resolution image $L_2$ and a third time phase low-resolution image $L_3$, wherein the first, second and third time phases are ordered in time sequence, the first time phase low-resolution image $L_1$ has a paired first time phase high-resolution image $H_1$ of the same region and time phase, and the third time phase low-resolution image $L_3$ has a paired third time phase high-resolution image $H_3$ of the same region and time phase;

a transition residual image data processing module for inputting the first time phase low-resolution image $L_1$ and the second time phase low-resolution image $L_2$ into the super-resolution convolutional neural network to obtain the first-second time phase transition residual image $\Delta\hat{H}_{12}$, and inputting the second time phase low-resolution image $L_2$ and the third time phase low-resolution image $L_3$ into the super-resolution convolutional neural network to obtain the second-third time phase transition residual image $\Delta\hat{H}_{23}$;

a sensor bias data processing module for inputting the first-second time phase transition residual image $\Delta\hat{H}_{12}$ into the bias feature extraction convolutional neural network to obtain the first-second time phase sensor bias $b_{12}$, and inputting the second-third time phase transition residual image $\Delta\hat{H}_{23}$ into the bias feature extraction convolutional neural network to obtain the second-third time phase sensor bias $b_{23}$;

a transition high-resolution image data processing module for adding the first time phase high-resolution image $H_1$, the first-second time phase transition residual image $\Delta\hat{H}_{12}$ and the first-second time phase sensor bias $b_{12}$ to obtain the second time phase forward transition high-resolution image $H_2^{\rightarrow}$, and adding the third time phase high-resolution image $H_3$, the second-third time phase transition residual image $\Delta\hat{H}_{23}$ and the second-third time phase sensor bias $b_{23}$ to obtain the second time phase backward transition high-resolution image $H_2^{\leftarrow}$;

and a high-resolution reconstructed image data processing module for inputting the second time phase forward transition high-resolution image $H_2^{\rightarrow}$ and the second time phase backward transition high-resolution image $H_2^{\leftarrow}$ into the time fusion convolutional neural network to obtain the second time phase high-resolution reconstructed image $\hat{H}_2$.
In one embodiment, the transition residual image data processing module is configured to:

train the super-resolution convolutional neural network with the loss function

$L_1(\Theta_1) = \frac{1}{wh}\,\| F_1(\Delta L_{ij}; \Theta_1) - \Delta H_{ij} \|^2$

where $F_1$ denotes the mapping function of the super-resolution convolutional neural network, $\Theta_1$ denotes the training weight parameters of the mapping function, $\Delta H_{ij} = H_j - H_i$ is the difference between the $i$-th time phase high-resolution image $H_i$ and the $j$-th time phase high-resolution image $H_j$ in the training set, $\Delta L_{ij} = L_j - L_i$ is the difference between the $i$-th time phase low-resolution image $L_i$ and the $j$-th time phase low-resolution image $L_j$ in the training set, $\|\cdot\|$ denotes the Euclidean norm, $w$ is the resolution of the images along the image length, and $h$ is the resolution of the images along the image width;

calculate the image difference $\Delta L_{12}$ between the first time phase low-resolution image $L_1$ and the second time phase low-resolution image $L_2$, and input it into the trained super-resolution convolutional neural network to obtain the first-second time phase transition residual image $\Delta\hat{H}_{12} = F_1(\Delta L_{12}; \hat{\Theta}_1)$, where $\hat{\Theta}_1$ denotes the weight parameters of the trained super-resolution convolutional neural network; and calculate the image difference $\Delta L_{23}$ between the second time phase low-resolution image $L_2$ and the third time phase low-resolution image $L_3$, and input it into the trained super-resolution convolutional neural network to obtain the second-third time phase transition residual image $\Delta\hat{H}_{23} = F_1(\Delta L_{23}; \hat{\Theta}_1)$.
In one embodiment, the sensor deviation data processing module is configured to implement:
using a loss function
Figure 495714DEST_PATH_IMAGE233
Training the bias features to extract a convolutional neural network, where,
Figure 146139DEST_PATH_IMAGE234
a mapping function representing a bias feature extraction convolutional neural network,
Figure 945598DEST_PATH_IMAGE235
training weight parameters representing bias feature extraction convolutional neural network mapping functions,
Figure 314263DEST_PATH_IMAGE236
to train the first
Figure 224581DEST_PATH_IMAGE237
Time-phase high-resolution image
Figure 726582DEST_PATH_IMAGE238
And a first step of
Figure 24840DEST_PATH_IMAGE239
Time-phase high-resolution image
Figure 756166DEST_PATH_IMAGE240
The difference of (a) to (b),
Figure 329230DEST_PATH_IMAGE241
is the ith, j time phase transition residual error image;
input the first two-time phase transition residual image $\Delta\widetilde H_{1,2}$ into the trained bias feature extraction convolutional neural network to obtain the first two-time phase sensor deviation $b_{1,2}=F_2(\Delta\widetilde H_{1,2};\hat\theta_2)$, wherein $\hat\theta_2$ denotes the weight parameters of the trained bias feature extraction convolutional neural network; and input the second three-time phase transition residual image $\Delta\widetilde H_{2,3}$ into the trained bias feature extraction convolutional neural network to obtain the second three-time phase sensor deviation $b_{2,3}=F_2(\Delta\widetilde H_{2,3};\hat\theta_2)$.

It should be noted that, in the training of the bias feature extraction convolutional neural network, the transition residual images $\Delta\widetilde H_{i,j}$ are calculated from the training set data using the trained super-resolution convolutional neural network.
In one embodiment, the bias feature extraction convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer, the three hidden layers corresponding to a feature extraction operation, a nonlinear mapping operation and a reconstruction operation, respectively.
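A minimal PyTorch sketch of such a three-hidden-layer network; the kernel sizes and channel counts are illustrative assumptions in the SRCNN style, not values given by the patent:

```python
import torch.nn as nn

class BiasFeatureCNN(nn.Module):
    """Input layer -> three convolutional hidden layers (feature extraction,
    nonlinear mapping, reconstruction) -> output layer."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # nonlinear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.net(x)
```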
In one embodiment, the high resolution reconstructed image data processing module comprises:
an up-sampling image data processing sub-module for up-sampling the first time-phase low-resolution image $L_1$, the second time-phase low-resolution image $L_2$ and the third time-phase low-resolution image $L_3$ by Bicubic interpolation to obtain a first time-phase up-sampled image $U_1$, a second time-phase up-sampled image $U_2$ and a third time-phase up-sampled image $U_3$, wherein the up-sampling ratio is the same as the sampling ratio of the high-resolution reconstructed image;

an up-sampling residual image data processing sub-module for taking the difference between the first time-phase up-sampled image $U_1$ and the second time-phase up-sampled image $U_2$ to obtain a first two-time phase up-sampling residual image $\Delta U_{1,2}$, and taking the difference between the second time-phase up-sampled image $U_2$ and the third time-phase up-sampled image $U_3$ to obtain a second three-time phase up-sampling residual image $\Delta U_{2,3}$;
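A sketch of these two sub-modules, assuming 4-D `(N, C, h, w)` tensors, that `scale` equals the high-resolution reconstruction ratio, and a subtraction order that is an assumption:

```python
import torch.nn.functional as F

def upsampled_residuals(l1, l2, l3, scale: int):
    # Bicubic up-sampling of the three low-resolution time-phase images.
    u1 = F.interpolate(l1, scale_factor=scale, mode="bicubic", align_corners=False)
    u2 = F.interpolate(l2, scale_factor=scale, mode="bicubic", align_corners=False)
    u3 = F.interpolate(l3, scale_factor=scale, mode="bicubic", align_corners=False)
    # First two-time phase and second three-time phase up-sampling residuals.
    return u1 - u2, u2 - u3
```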
A time-fused convolutional neural network training submodule for using the loss function
Figure 659344DEST_PATH_IMAGE255
Training a time-fused convolutional neural network, where,
Figure 719704DEST_PATH_IMAGE256
a mapping function representing a time-fused convolutional neural network,
Figure 554936DEST_PATH_IMAGE257
a training weight parameter representing the mapping function,
Figure 81863DEST_PATH_IMAGE258
representing the first in the training set
Figure 197019DEST_PATH_IMAGE258
The time phase is,
Figure 733174DEST_PATH_IMAGE259
is a j-th phase high-resolution image,
Figure 157333DEST_PATH_IMAGE260
is the forward transition high resolution picture from the ith phase at the jth phase,
Figure 120741DEST_PATH_IMAGE261
is a backward-transitional high resolution image from the kth phase at the jth phase,
Figure 856616DEST_PATH_IMAGE262
sampling residual images for the ith, j-th phase,
Figure 662373DEST_PATH_IMAGE263
sampling residual image at j, k time phase,
Figure 206618DEST_PATH_IMAGE264
respectively the i, j, k time phase up-sampled image obtained by up-sampling the i, j, k time phase low resolution image,
Figure 544190DEST_PATH_IMAGE265
is a picture structure similarity function, wherein
Figure 173885DEST_PATH_IMAGE266
Respectively representing images
Figure 648247DEST_PATH_IMAGE267
The mean value of all the elements in (c),
Figure 46999DEST_PATH_IMAGE268
respectively representing images
Figure 555472DEST_PATH_IMAGE267
The standard deviation of all the elements in (A),
Figure 360DEST_PATH_IMAGE269
representing an image
Figure 885270DEST_PATH_IMAGE267
Middle yuanThe covariance of the elements is determined by the covariance,
Figure 401178DEST_PATH_IMAGE270
two very small constants, the prevention denominator is 0;
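A sketch of this objective with global statistics, since the definitions above take the mean, standard deviation and covariance over all elements of each image; the constant values `c1` and `c2` and the exact `1 - SSIM` form are assumptions:

```python
import torch

def global_ssim(x: torch.Tensor, y: torch.Tensor,
                c1: float = 1e-4, c2: float = 9e-4) -> torch.Tensor:
    # Statistics over all elements of each image, per the definitions above.
    mu_x, mu_y = x.mean(), y.mean()
    var_x = ((x - mu_x) ** 2).mean()
    var_y = ((y - mu_y) ** 2).mean()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def fusion_loss(reconstruction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Training objective: maximise structural similarity to the true image.
    return 1.0 - global_ssim(reconstruction, target)
```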
a high-resolution reconstructed image data output sub-module for inputting the second time-phase forward transition high-resolution image $H_2^{f}$, the second time-phase backward transition high-resolution image $H_2^{b}$, the first two-time phase up-sampling residual image $\Delta U_{1,2}$ and the second three-time phase up-sampling residual image $\Delta U_{2,3}$ into the trained time fusion convolutional neural network to obtain the second time-phase high-resolution reconstructed image $\hat H_2=F_3(H_2^{f},H_2^{b},\Delta U_{1,2},\Delta U_{2,3};\hat\theta_3)$.

It should be noted that, in the training of the time fusion convolutional neural network, the forward and backward transition high-resolution images $H_j^{f}$ and $H_j^{b}$ are calculated from the training set data by the transition high-resolution image data processing module.
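Ahead of the layer-by-layer description below, a sketch of the element-wise combination that the time fusion network's output layer performs; `phi` is the weight tensor produced by the network's weight-extraction hidden layer (see below), and the sigmoid squashing to [0, 1] is an assumption:

```python
import torch

def fuse_output(phi: torch.Tensor,
                h_fwd: torch.Tensor,
                h_bwd: torch.Tensor) -> torch.Tensor:
    # phi:   W x H weight tensor from the weight-extraction hidden layer
    # h_fwd: second time-phase forward transition high-resolution image
    # h_bwd: second time-phase backward transition high-resolution image
    phi = torch.sigmoid(phi)                    # assumed squashing to [0, 1]
    return phi * h_fwd + (1.0 - phi) * h_bwd    # element-wise weighted fusion
```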
In the above description of the high-resolution image reconstruction apparatus, only the first, second and third time phases are used; in the training of the three convolutional neural networks, however, any three images in the image set may be combined into a three-time-phase combination, with $i$, $j$ and $k$ corresponding to the first, second and third time phases, respectively.
In one embodiment, the time fusion convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer;
the input layer is used for stacking the first two-time phase up-sampling residual image, the second three-time phase up-sampling residual image, the second time-phase forward transition high-resolution image and the second time-phase backward transition high-resolution image into four-channel data that is input to the model;
the three hidden layers correspond to a feature extraction operation, a nonlinear mapping operation and a weight extraction operation, respectively, and the hidden layer corresponding to the weight extraction outputs a $W\times H$ tensor $\Phi$, wherein $W$ and $H$ are the length and width resolutions of the high-resolution image;

the output formula of the output layer is

$$\hat H_2=\Phi\odot H_2^{f}+\left(\mathbf{1}-\Phi\right)\odot H_2^{b},$$

wherein $\hat H_2$ is the second time-phase high-resolution reconstructed image, $H_2^{f}$ is the second time-phase forward transition high-resolution image, $H_2^{b}$ is the second time-phase backward transition high-resolution image, $\mathbf{1}$ denotes a $W\times H$ tensor whose elements are all 1, $\Phi$ is the $W\times H$ tensor output by the previous layer, and "$\odot$" denotes multiplication of elements at corresponding positions.
In one embodiment, an electronic device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the high-resolution image reconstruction method of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, which stores a computer program; when the computer program is executed by a processor, the processor performs the steps of the high-resolution image reconstruction method of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is comparatively specific and detailed, but they are not to be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for high-resolution reconstruction of an image, characterized by comprising the following steps:
acquiring a first time phase low-resolution image, a second time phase low-resolution image and a third time phase low-resolution image, wherein the first time phase, the second time phase and the third time phase are sequentially arranged according to a time sequence, the first time phase low-resolution image is provided with a first time phase high-resolution image which is paired in the same region and the same time phase, and the third time phase low-resolution image is provided with a third time phase high-resolution image which is paired in the same region and the same time phase;
inputting the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-time-phase transition residual image, and inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-time-phase transition residual image;
inputting the first two-time phase transition residual image into a bias characteristic extraction convolutional neural network to obtain a first two-time phase sensor deviation, and inputting the second three-time phase transition residual image into a bias characteristic extraction convolutional neural network to obtain a second three-time phase sensor deviation;
adding the first time phase high-resolution image, the first two-time phase transition residual image and the first two-time phase sensor deviation to obtain a second time phase forward transition high-resolution image, and adding the third time phase high-resolution image, the second three-time phase transition residual image and the second three-time phase sensor deviation to obtain a second time phase backward transition high-resolution image;
and inputting the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image into a time fusion convolutional neural network to obtain a second time phase high-resolution reconstructed image.
2. The method according to claim 1, wherein the inputting the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-phase transition residual image, and the inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-phase transition residual image specifically comprises:
using a loss function

$$\mathcal{L}(\theta_1)=\frac{1}{W\cdot H}\left\|F_1(\Delta L_{i,j};\theta_1)-\Delta H_{i,j}\right\|_2^2$$

to train the super-resolution convolutional neural network, wherein $F_1$ denotes the mapping function of the super-resolution convolutional neural network, $\theta_1$ denotes a training weight parameter of the mapping function, $\Delta H_{i,j}$ is the difference between the $i$-th time-phase high-resolution image and the $j$-th time-phase high-resolution image in the training set, $\Delta L_{i,j}$ is the difference between the $i$-th time-phase low-resolution image and the $j$-th time-phase low-resolution image in the training set, $\|\cdot\|_2$ denotes the Euclidean norm, $W$ is the resolution of $\Delta H_{i,j}$ and $F_1(\Delta L_{i,j};\theta_1)$ along the image length, and $H$ is their resolution along the image width;
calculating the image difference between the first time-phase low-resolution image and the second time-phase low-resolution image, and inputting the image difference into a trained super-resolution convolutional neural network to obtain a first two-time-phase transition residual image;
and calculating the image difference between the second time-phase low-resolution image and the third time-phase low-resolution image, and inputting the image difference into the trained super-resolution convolutional neural network to obtain a second three-time-phase transition residual image.
3. The method as claimed in claim 1, wherein the inputting the first two-time phase transition residual image into a bias feature extraction convolutional neural network to obtain a first two-time phase sensor deviation, and the inputting the second three-time phase transition residual image into the bias feature extraction convolutional neural network to obtain a second three-time phase sensor deviation specifically comprises:
using a loss function

$$\mathcal{L}(\theta_2)=\frac{1}{W\cdot H}\left\|F_2(\Delta\widetilde H_{i,j};\theta_2)-\left(\Delta H_{i,j}-\Delta\widetilde H_{i,j}\right)\right\|_2^2$$

to train the bias feature extraction convolutional neural network, wherein $F_2$ denotes the mapping function of the bias feature extraction convolutional neural network, $\theta_2$ denotes a training weight parameter of the mapping function, $\Delta H_{i,j}$ is the difference between the $i$-th time-phase high-resolution image and the $j$-th time-phase high-resolution image in the training set, and $\Delta\widetilde H_{i,j}$ is the $i$-$j$ time-phase transition residual image;
inputting the first two-time phase transition residual error image into the trained bias feature extraction convolutional neural network to obtain a first two-time phase sensor deviation;
and inputting the second three-time phase transition residual image into the trained bias feature extraction convolutional neural network to obtain a second three-time phase sensor deviation.
4. The method of claim 3, wherein the bias feature extraction convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer, and the three hidden layers correspond to the feature extraction operation, the nonlinear mapping operation and the reconstruction operation, respectively.
5. The method as claimed in claim 1, wherein the inputting the second-phase forward transition high-resolution image and the second-phase backward transition high-resolution image into a time fusion convolutional neural network to obtain the second-phase high-resolution reconstructed image specifically includes:
using a Bicubic interpolation method to perform upsampling on the first time phase low-resolution image, the second time phase low-resolution image and the third time phase low-resolution image to obtain a first time phase upsampled image, a second time phase upsampled image and a third time phase upsampled image, wherein the upsampling proportion is the same as the sampling proportion of the image after high-resolution reconstruction;
subtracting the first time phase up-sampled image from the second time phase up-sampled image to obtain a first two-time phase up-sampled residual image, and subtracting the second time phase up-sampled image from the third time phase up-sampled image to obtain a second three-time phase up-sampled residual image;
using a loss function

$$\mathcal{L}(\theta_3)=1-\mathrm{SSIM}\left(F_3(H_j^{f},H_j^{b},\Delta U_{i,j},\Delta U_{j,k};\theta_3),\,H_j\right)$$

to train the time fusion convolutional neural network, wherein $F_3$ denotes the mapping function of the time fusion convolutional neural network, $\theta_3$ denotes a training weight parameter of the mapping function, $j$ denotes the $j$-th time phase in the training set, $H_j$ is the $j$-th time-phase high-resolution image, $H_j^{f}$ is the forward transition high-resolution image at the $j$-th phase obtained from the $i$-th phase, $H_j^{b}$ is the backward transition high-resolution image at the $j$-th phase obtained from the $k$-th phase, $\Delta U_{i,j}$ is the $i$-$j$ time-phase up-sampling residual image, $\Delta U_{j,k}$ is the $j$-$k$ time-phase up-sampling residual image, $U_i$, $U_j$ and $U_k$ are the up-sampled images obtained by up-sampling the $i$-th, $j$-th and $k$-th time-phase low-resolution images, respectively, and $\mathrm{SSIM}$ is the image structural similarity function

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)},$$

wherein $\mu_x$ and $\mu_y$ denote the means of all elements of the images $x$ and $y$, respectively, $\sigma_x$ and $\sigma_y$ denote the standard deviations of all elements of $x$ and $y$, $\sigma_{xy}$ denotes the covariance of the elements of $x$ and $y$, and $c_1$ and $c_2$ are two very small constants that prevent the denominator from being 0;
and inputting the second time phase forward transition high-resolution image, the second time phase backward transition high-resolution image, the first two-time phase up-sampling residual image and the second three-time phase up-sampling residual image into the trained time fusion convolutional neural network to obtain a second time phase high-resolution reconstructed image.
6. The method according to claim 5, wherein the time fusion convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer;
the input layer is used for stacking the first two-time phase up-sampling residual image, the second three-time phase up-sampling residual image, the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image together to form a four-channel data input model;
the three hidden layers correspond to the feature extraction operation, the nonlinear mapping operation and the weight extraction operation, respectively, and the hidden layer corresponding to the weight extraction outputs a $W\times H$ tensor $\Phi$, wherein $W$ and $H$ are the length and width resolutions of the high-resolution image;

the output formula of the output layer is

$$\hat H_2=\Phi\odot H_2^{f}+\left(\mathbf{1}-\Phi\right)\odot H_2^{b},$$

wherein $\hat H_2$ is the second time-phase high-resolution reconstructed image, $H_2^{f}$ is the second time-phase forward transition high-resolution image, $H_2^{b}$ is the second time-phase backward transition high-resolution image, $\mathbf{1}$ denotes a $W\times H$ tensor whose elements are all 1, $\Phi$ is the $W\times H$ tensor output by the previous layer, and "$\odot$" denotes multiplication of elements at corresponding positions.
7. An apparatus for high resolution reconstruction of an image, comprising:
the image acquisition module is used for acquiring a first time phase low-resolution image, a second time phase low-resolution image and a third time phase low-resolution image, wherein the first time phase, the second time phase and the third time phase are sequentially arranged according to a time sequence, the first time phase low-resolution image is provided with a first time phase high-resolution image which is paired in the same region and the same time phase, and the third time phase low-resolution image is provided with a third time phase high-resolution image which is paired in the same region and the same time phase;
the transition residual image data processing module is used for inputting the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-time-phase transition residual image, and inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-time-phase transition residual image;
the sensor deviation data processing module is used for inputting the first two-time phase transition residual error image into a bias feature extraction convolutional neural network to obtain a first two-time phase sensor deviation, and inputting the second three-time phase transition residual error image into the bias feature extraction convolutional neural network to obtain a second three-time phase sensor deviation;
the transition high-resolution image data processing module is used for adding the first time-phase high-resolution image, the first two-time phase transition residual image and the first two-time phase sensor deviation to obtain a second time-phase forward transition high-resolution image, and adding the third time-phase high-resolution image, the second three-time phase transition residual image and the second three-time phase sensor deviation to obtain a second time-phase backward transition high-resolution image;
and the high-resolution reconstruction image data processing module is used for inputting the second time-phase forward transition high-resolution image and the second time-phase backward transition high-resolution image into a time fusion convolutional neural network to obtain a second time-phase high-resolution reconstructed image.
8. The apparatus for high-resolution reconstruction of images according to claim 7, wherein said high-resolution reconstructed image data processing module comprises:
the up-sampling image data processing submodule is used for up-sampling the first time phase low-resolution image, the second time phase low-resolution image and the third time phase low-resolution image by using a Bicubic interpolation method to obtain a first time phase up-sampling image, a second time phase up-sampling image and a third time phase up-sampling image, wherein the up-sampling proportion is the same as the sampling proportion of the image after high-resolution reconstruction;
the up-sampling residual image data processing sub-module is used for subtracting the first time phase up-sampling image from the second time phase up-sampling image to obtain a first two-time phase up-sampling residual image, and subtracting the second time phase up-sampling image from the third time phase up-sampling image to obtain a second three-time phase up-sampling residual image;
a time fusion convolutional neural network training sub-module for using a loss function

$$\mathcal{L}(\theta_3)=1-\mathrm{SSIM}\left(F_3(H_j^{f},H_j^{b},\Delta U_{i,j},\Delta U_{j,k};\theta_3),\,H_j\right)$$

to train the time fusion convolutional neural network, wherein $F_3$ denotes the mapping function of the time fusion convolutional neural network, $\theta_3$ denotes a training weight parameter of the mapping function, $j$ denotes the $j$-th time phase in the training set, $H_j$ is the $j$-th time-phase high-resolution image, $H_j^{f}$ is the forward transition high-resolution image at the $j$-th phase obtained from the $i$-th phase, $H_j^{b}$ is the backward transition high-resolution image at the $j$-th phase obtained from the $k$-th phase, $\Delta U_{i,j}$ is the $i$-$j$ time-phase up-sampling residual image, $\Delta U_{j,k}$ is the $j$-$k$ time-phase up-sampling residual image, $U_i$, $U_j$ and $U_k$ are the up-sampled images obtained by up-sampling the $i$-th, $j$-th and $k$-th time-phase low-resolution images, respectively, and $\mathrm{SSIM}$ is the image structural similarity function

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)},$$

wherein $\mu_x$ and $\mu_y$ denote the means of all elements of the images $x$ and $y$, respectively, $\sigma_x$ and $\sigma_y$ denote the standard deviations of all elements of $x$ and $y$, $\sigma_{xy}$ denotes the covariance of the elements of $x$ and $y$, and $c_1$ and $c_2$ are two very small constants that prevent the denominator from being 0;
and the high-resolution reconstructed image data output sub-module is used for inputting the second time phase forward transition high-resolution image, the second time phase backward transition high-resolution image, the first two-time phase up-sampling residual image and the second three-time phase up-sampling residual image into the trained time fusion convolutional neural network to obtain a second time phase high-resolution reconstructed image.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor executing the program to implement a high resolution reconstruction method of images as claimed in any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, the program being executable by a processor for implementing a method for high resolution reconstruction of images as claimed in any one of claims 1 to 6.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211179917.3A CN115272084B (en) 2022-09-27 2022-09-27 High-resolution image reconstruction method and device


Publications (2)

Publication Number Publication Date
CN115272084A CN115272084A (en) 2022-11-01
CN115272084B true CN115272084B (en) 2022-12-16

Family

ID=83756820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211179917.3A Active CN115272084B (en) 2022-09-27 2022-09-27 High-resolution image reconstruction method and device

Country Status (1)

Country Link
CN (1) CN115272084B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109727197A (en) * 2019-01-03 2019-05-07 云南大学 A kind of medical image super resolution ratio reconstruction method
CN110164148A (en) * 2019-05-28 2019-08-23 成都信息工程大学 A kind of urban road crossing traffic lights intelligently matches period control method and control system
CN111754403A (en) * 2020-06-15 2020-10-09 南京邮电大学 Image super-resolution reconstruction method based on residual learning
CN111932457A (en) * 2020-08-06 2020-11-13 北方工业大学 High-space-time fusion processing algorithm and device for remote sensing image
CN114841856A (en) * 2022-03-07 2022-08-02 中国矿业大学 Image super-pixel reconstruction method of dense connection network based on depth residual channel space attention

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Image Super-Resolution with Non-Local Sparse Attention; Yiqun Mei et al.; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2021-11-02; 3517-3526 *
Research on a super-resolution image reconstruction method based on an improved residual sub-pixel convolutional neural network; Li Lan et al.; Journal of Changchun Normal University; 2020-08-20 (No. 08); 23-29 *
A remote-sensing high spatio-temporal fusion method based on deep learning and super-resolution reconstruction; Zhang Yongmei et al.; Computer Engineering & Science; 2020-09-15 (No. 09); 1578-1586 *

Also Published As

Publication number Publication date
CN115272084A (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN110969577B (en) Video super-resolution reconstruction method based on deep double attention network
González-Audícana et al. A low computational-cost method to fuse IKONOS images using the spectral response function of its sensors
Xu et al. HAM-MFN: Hyperspectral and multispectral image multiscale fusion network with RAP loss
Li et al. Sparse-based reconstruction of missing information in remote sensing images from spectral/temporal complementary information
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
Wang et al. Enhanced deep blind hyperspectral image fusion
CN115511767B (en) Self-supervised learning multi-modal image fusion method and application thereof
Dou et al. Medical image super-resolution via minimum error regression model selection using random forest
Yang et al. Image super-resolution reconstruction based on improved Dirac residual network
Nercessian et al. Multiresolution decomposition schemes using the parameterized logarithmic image processing model with application to image fusion
CN113408540B (en) Synthetic aperture radar image overlap area extraction method and storage medium
Wang et al. Medical image super-resolution analysis with sparse representation
CN115272084B (en) High-resolution image reconstruction method and device
Lei et al. Convolution neural network with edge structure loss for spatiotemporal remote sensing image fusion
CN111950496B (en) Mask person identity recognition method
Yang et al. Fast multisensor infrared image super-resolution scheme with multiple regression models
CN110689510B (en) Sparse representation-based image fusion method introducing dictionary information
Yang et al. Multi-semi-couple super-resolution method for edge computing
CN116563103A (en) Remote sensing image space-time fusion method based on self-adaptive neural network
Cengiz et al. The Effect of Super Resolution Method on Classification Performance of Satellite Images
Wang et al. Using 250-m MODIS data for enhancing spatiotemporal fusion by sparse representation
CN113066030B (en) Multispectral image panchromatic sharpening method and system based on space-spectrum fusion network
CN114926335A (en) Video super-resolution method and system based on deep learning and computer equipment
CN113971763A (en) Small target segmentation method and device based on target detection and super-resolution reconstruction
CN117576483B (en) Multisource data fusion ground object classification method based on multiscale convolution self-encoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant