CN115272084B - High-resolution image reconstruction method and device - Google Patents
- Publication number
- CN115272084B (application CN202211179917.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- time
- phase
- resolution
- time phase
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The application provides a high-resolution image reconstruction method and device. The method comprises the following steps: acquiring a first time-phase, a second time-phase and a third time-phase low-resolution image, the three time phases being arranged in time order, where the first time-phase and third time-phase low-resolution images each have a paired high-resolution image of the same region and time phase; obtaining transition residual images through a super-resolution convolutional neural network; obtaining sensor deviations through a bias feature extraction convolutional neural network; and obtaining a second time-phase high-resolution reconstructed image through a time fusion convolutional neural network. Based on sensor error correction and spatio-temporal data fusion, the method can effectively improve the visual quality of high-resolution image reconstruction. An image high-resolution reconstruction apparatus, an electronic device and a storage medium are also provided.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for reconstructing a high resolution image based on sensor error correction and spatio-temporal data fusion, an electronic device, and a storage medium.
Background
Remote sensing image time series have important applications in fields such as agricultural remote-sensing monitoring and the informatization of agriculture and rural areas. When monitoring or processing the same agricultural area, remote sensing satellite images acquired over that area at different times are usually analyzed together. However, there is an inherent trade-off between the temporal resolution and the spatial resolution of common remote sensing data. For example, Landsat data have high spatial resolution but relatively low temporal resolution, and are easily affected by cloudy and rainy weather, so effective data may be unavailable during critical periods of crop monitoring. MODIS data, by contrast, have high temporal resolution but relatively low spatial resolution, which causes mixed-pixel problems in crop classification; MODIS data are therefore unsuitable for areas with complex planting structures, fragmented landscapes and strong heterogeneity. The prior art addresses this by fusing spatio-temporal data from the satellite images of two sensors: sensor one with very high temporal resolution but coarse spatial resolution, and sensor two with very high spatial resolution but lower temporal resolution. The fused output is a composite image sequence with the temporal resolution of sensor one and the spatial resolution of sensor two.
However, existing spatio-temporal fusion methods have the following problems: (1) they typically reconstruct high-resolution images under the assumption that image changes can be transferred directly from one sensor to another, ignoring differences in how well different sensors characterize change, which can lead to spectral and spatial distortion in the reconstructed image; (2) they generally fuse temporal features directly with a linear weighting method, which does not fully account for the change characteristics of every pixel and limits the representational capability of the image features.
Therefore, a new high resolution image reconstruction method is needed to solve the above problems.
Disclosure of Invention
The application provides an image high-resolution reconstruction method and device, an electronic device and a storage medium. Based on sensor error correction and spatio-temporal data fusion, the method can effectively improve the visual quality of high-resolution image reconstruction.
According to a first aspect, the present invention provides a high resolution image reconstruction method, comprising the following steps:
acquiring a first time phase low-resolution image, a second time phase low-resolution image and a third time phase low-resolution image, wherein the first time phase, the second time phase and the third time phase are sequentially arranged according to a time sequence, the first time phase low-resolution image is provided with a first time phase high-resolution image which is paired in the same region and the same time phase, and the third time phase low-resolution image is provided with a third time phase high-resolution image which is paired in the same region and the same time phase;
inputting the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-time-phase transition residual image, and inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-time-phase transition residual image;
inputting the first two-time phase transition residual image into a bias feature extraction convolutional neural network to obtain a first two-time phase sensor deviation, and inputting the second three-time phase transition residual image into the bias feature extraction convolutional neural network to obtain a second three-time phase sensor deviation;
adding the first time-phase high-resolution image, the first two-time phase transition residual image and the first two-time phase sensor deviation to obtain a second time-phase forward transition high-resolution image, and adding the third time-phase high-resolution image, the second three-time phase transition residual image and the second three-time phase sensor deviation to obtain a second time-phase backward transition high-resolution image;
and inputting the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image into a time fusion convolution neural network to obtain a second time phase high-resolution reconstruction image.
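The five steps above can be sketched end to end. In the sketch below, srcnn, bias_cnn and fusion_cnn are hypothetical placeholder callables standing in for the three trained networks (the patent specifies their roles, not their weights), and the ordering of the difference L2 - L3 is an assumption chosen so that adding the residual to the third-phase image moves backward in time:

```python
import numpy as np

def reconstruct_second_phase(L1, L2, L3, H1, H3, srcnn, bias_cnn, fusion_cnn):
    """Sketch of the five claimed steps. srcnn, bias_cnn and fusion_cnn are
    placeholder callables standing in for the trained networks."""
    R12 = srcnn(L2 - L1)       # first two-time phase transition residual image
    R23 = srcnn(L2 - L3)       # second three-time phase transition residual image
    B12 = bias_cnn(R12)        # first two-time phase sensor deviation
    B23 = bias_cnn(R23)        # second three-time phase sensor deviation
    F2 = H1 + R12 + B12        # second time-phase forward transition image
    B2 = H3 + R23 + B23        # second time-phase backward transition image
    return fusion_cnn(F2, B2)  # second time-phase high-resolution reconstruction
```

With identity-style stand-ins for the networks, the pipeline degenerates to returning the second-phase image exactly, which makes the data flow easy to verify.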
Optionally, the inputting the first time-phase low-resolution image and the second time-phase low-resolution image into the super-resolution convolutional neural network to obtain a first two-time-phase transition residual image, and the inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-time-phase transition residual image specifically include:
using the loss function

$$L_{SR}(\Theta)=\frac{1}{w\,h}\,\bigl\|F_{SR}(Y_{ij};\Theta)-X_{ij}\bigr\|_F^{2}$$

to train the super-resolution convolutional neural network, where $F_{SR}$ denotes the mapping function of the super-resolution convolutional neural network and $\Theta$ its training weight parameters; $X_{ij}=H_j-H_i$ is the difference between the $i$-th time-phase high-resolution image $H_i$ and the $j$-th time-phase high-resolution image $H_j$ in the training set; $Y_{ij}=L_j-L_i$ is the difference between the $i$-th time-phase low-resolution image $L_i$ and the $j$-th time-phase low-resolution image $L_j$; $\|\cdot\|_F$ denotes the Euclidean (Frobenius) norm; and $w$ and $h$ are the resolutions of $X_{ij}$ and $Y_{ij}$ along the image length and width;
calculating the image difference between the first time-phase low-resolution image and the second time-phase low-resolution image, and inputting the image difference into a trained super-resolution convolutional neural network to obtain a first two-time-phase transition residual image;
and calculating the image difference between the second time-phase low-resolution image and the third time-phase low-resolution image, and inputting the image difference into the trained super-resolution convolutional neural network to obtain a second three-time-phase transition residual image.
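This training objective is a squared-error between the predicted residual and the true high-resolution difference, normalised by the image resolution. A minimal sketch, assuming the per-pixel form $\|F(Y_{ij};\Theta)-X_{ij}\|_F^2/(w\,h)$ (the exact normalisation is a reconstruction from the surrounding definitions):

```python
import numpy as np

def srcnn_loss(pred_residual, X_ij):
    """Squared Euclidean (Frobenius) norm between the network output
    F(Y_ij; Theta) and the high-resolution difference X_ij = H_j - H_i,
    normalised by the image resolution w * h."""
    w, h = X_ij.shape
    return float(np.sum((pred_residual - X_ij) ** 2) / (w * h))
```

A perfect prediction gives a loss of zero; a constant one-pixel offset over the whole image gives a loss of exactly one.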
Optionally, the inputting the first two-time phase transition residual image into the bias feature extraction convolutional neural network to obtain a first two-time phase sensor deviation, and the inputting the second three-time phase transition residual image into the bias feature extraction convolutional neural network to obtain a second three-time phase sensor deviation specifically include:
using the loss function

$$L_{B}(\Phi)=\frac{1}{w\,h}\,\bigl\|F_{B}(R_{ij};\Phi)-\bigl(X_{ij}-R_{ij}\bigr)\bigr\|_F^{2}$$

to train the bias feature extraction convolutional neural network, where $F_{B}$ denotes the mapping function of the bias feature extraction convolutional neural network and $\Phi$ the training weight parameters of that mapping function; $X_{ij}=H_j-H_i$ is the difference between the $i$-th and $j$-th time-phase high-resolution images in the training set; $R_{ij}$ is the $i$,$j$ time-phase transition residual image; and the target $X_{ij}-R_{ij}$ is the sensor deviation the network learns;
inputting the first two-time phase transition residual error image into the trained bias feature extraction convolutional neural network to obtain a first two-time phase sensor deviation;
and inputting the second three-time phase transition residual image into the trained bias feature extraction convolutional neural network to obtain a second three-time phase sensor deviation.
Optionally, the bias feature extraction convolutional neural network includes an input layer, three convolutional concealment layers, and an output layer, where the three convolutional concealment layers correspond to a feature extraction operation, a nonlinear mapping operation, and a reconstruction operation, respectively.
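The three-hidden-layer structure (feature extraction, nonlinear mapping, reconstruction) can be illustrated with a naive NumPy sketch. The kernels and ReLU activations below are assumptions for illustration; the text specifies only the layer roles:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive single-channel 'same'-padded convolution (CNN convention,
    i.e. cross-correlation), enough to illustrate the layer structure."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def bias_feature_cnn(residual, k_feat, k_map, k_rec):
    """Three hidden convolution layers corresponding to the feature
    extraction, nonlinear mapping and reconstruction operations
    (kernels are hypothetical stand-ins for learned weights)."""
    h = np.maximum(conv2d_same(residual, k_feat), 0.0)  # feature extraction + ReLU
    h = np.maximum(conv2d_same(h, k_map), 0.0)          # nonlinear mapping + ReLU
    return conv2d_same(h, k_rec)                        # reconstruction layer
```

With delta kernels (identity convolutions) and a non-negative input, the stack reproduces its input, which confirms the shapes and padding behave as intended.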
Optionally, the inputting the second time-phase forward transition high-resolution image and the second time-phase backward transition high-resolution image into the time fusion convolutional neural network to obtain the second time-phase high-resolution reconstructed image specifically includes:
using a Bicubic interpolation method to perform upsampling on the first time phase low-resolution image, the second time phase low-resolution image and the third time phase low-resolution image to obtain a first time phase upsampled image, a second time phase upsampled image and a third time phase upsampled image, wherein the upsampling proportion is the same as the sampling proportion of the image after high-resolution reconstruction;
subtracting the first time phase up-sampled image from the second time phase up-sampled image to obtain a first two-time phase up-sampled residual image, and subtracting the second time phase up-sampled image from the third time phase up-sampled image to obtain a second three-time phase up-sampled residual image;
using the loss function

$$L_{T}(\Psi)=\bigl\|F_{T}(F_{i\to j},B_{k\to j},\Delta U_{ij},\Delta U_{jk};\Psi)-H_j\bigr\|_F^{2}+\bigl(1-\mathrm{SSIM}\bigl(F_{T}(F_{i\to j},B_{k\to j},\Delta U_{ij},\Delta U_{jk};\Psi),\,H_j\bigr)\bigr)$$

to train the time fusion convolutional neural network, where $F_{T}$ denotes the mapping function of the time fusion convolutional neural network and $\Psi$ its training weight parameters; $j$ indexes the time phase in the training set; $H_j$ is the high-resolution image of the $j$-th phase; $F_{i\to j}$ is the forward transition high-resolution image from the $i$-th phase at the $j$-th phase; $B_{k\to j}$ is the backward transition high-resolution image from the $k$-th phase at the $j$-th phase; $\Delta U_{ij}=U_j-U_i$ is the $i$,$j$ time-phase up-sampled residual image and $\Delta U_{jk}=U_k-U_j$ the $j$,$k$ time-phase up-sampled residual image, with $U_i$, $U_j$, $U_k$ the up-sampled images obtained by up-sampling the $i$, $j$, $k$ time-phase low-resolution images; and $\mathrm{SSIM}$ is the image structural similarity function

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)}$$

in which $\mu_x$, $\mu_y$ denote the means of all elements of images $x$ and $y$, $\sigma_x$, $\sigma_y$ the standard deviations of all their elements, $\sigma_{xy}$ the covariance of their elements, and $c_1$, $c_2$ two very small constants preventing the denominator from being 0;
and inputting the second time-phase forward transition high-resolution image, the second time-phase backward transition high-resolution image, the first two-time phase up-sampling residual image and the second three-time phase up-sampling residual image into the trained time fusion convolutional neural network to obtain a second time-phase high-resolution reconstructed image.
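The structural similarity function used in this training loss can be sketched in its standard global form, with the small constants guarding the denominator as the description notes (window-based SSIM variants exist; the global form is assumed here):

```python
import numpy as np

def ssim(x, y, c1=1e-8, c2=1e-8):
    """Global structural similarity between two images:
    (2*mu_x*mu_y + c1) * (2*cov_xy + c2)
    / ((mu_x^2 + mu_y^2 + c1) * (var_x + var_y + c2))."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = float(((x - mu_x) * (y - mu_y)).mean())
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
            / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

An SSIM of 1 indicates identical structure, which is why the loss penalises the term (1 - SSIM).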
Optionally, the time-fusion convolutional neural network includes an input layer, three convolutional concealment layers, and an output layer;
the input layer is used for stacking the first two-time phase up-sampling residual error image, the second three-time phase up-sampling residual error image, the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image together to form a four-channel data input model;
three hidden layers respectively correspond to the feature extraction operation, the nonlinear mapping operation and the weight extraction operation, and the hidden layer corresponding to weight extraction outputs a $w\times h$ tensor $W$, where $w$ and $h$ are the length and width resolutions of the high-resolution image;
the output formula of the output layer is
In the formula (I), the compound is shown in the specification,for the second-phase high-resolution reconstructed image,for the second phase forward transition high resolution image,the high resolution image is backward-transitioned for the second phase,representing all elements being 1The number of tensors is such that,one for output of the previous layerTensor of ""indicates multiplication of elements at corresponding positions.
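The output layer is therefore a per-pixel convex combination of the two transition images, blended by the learned weight tensor. A direct sketch:

```python
import numpy as np

def fuse_output_layer(W, forward_img, backward_img):
    """Output layer of the time fusion network: a per-pixel combination
    H = W * forward + (1 - W) * backward, where W is the w x h weight
    tensor produced by the weight-extraction hidden layer."""
    return W * forward_img + (1.0 - W) * backward_img
```

Setting every weight to 0 returns the backward image, 1 returns the forward image, and 0.5 averages the two, which matches the element-wise formula.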
According to a second aspect, the present invention provides an image high resolution reconstruction apparatus, comprising:
the image acquisition module is used for acquiring a first time phase low-resolution image, a second time phase low-resolution image and a third time phase low-resolution image, wherein the first time phase, the second time phase and the third time phase are sequentially arranged according to a time sequence, the first time phase low-resolution image is provided with a first time phase high-resolution image which is paired in the same region and the same time phase, and the third time phase low-resolution image is provided with a third time phase high-resolution image which is paired in the same region and the same time phase;
the transition residual image data processing module is used for inputting the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-time-phase transition residual image, and inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-time-phase transition residual image;
the sensor deviation data processing module is used for inputting the first two-time phase transition residual error image into a bias characteristic extraction convolutional neural network to obtain a first two-time phase sensor deviation, and inputting the second three-time phase transition residual error image into the bias characteristic extraction convolutional neural network to obtain a second three-time phase sensor deviation;
the transition high-resolution image data processing module is used for adding the first time-phase high-resolution image, the first two-time phase transition residual image and the first two-time phase sensor deviation to obtain a second time-phase forward transition high-resolution image, and adding the third time-phase high-resolution image, the second three-time phase transition residual image and the second three-time phase sensor deviation to obtain a second time-phase backward transition high-resolution image;
and the high-resolution reconstruction image data processing module is used for inputting the second time-phase forward transition high-resolution image and the second time-phase backward transition high-resolution image into a time fusion convolution neural network to obtain a second time-phase high-resolution reconstruction image.
Optionally, the high-resolution reconstructed image data processing module includes:
the up-sampling image data processing submodule is used for up-sampling the first time phase low-resolution image, the second time phase low-resolution image and the third time phase low-resolution image by using a Bicubic interpolation method to obtain a first time phase up-sampling image, a second time phase up-sampling image and a third time phase up-sampling image, wherein the up-sampling proportion is the same as the sampling proportion of the image after high-resolution reconstruction;
the up-sampling residual image data processing sub-module is used for subtracting the first time phase up-sampling image from the second time phase up-sampling image to obtain a first two-time phase up-sampling residual image, and subtracting the second time phase up-sampling image from the third time phase up-sampling image to obtain a second three-time phase up-sampling residual image;
a time fusion convolutional neural network training submodule, for using the loss function

$$L_{T}(\Psi)=\bigl\|F_{T}(F_{i\to j},B_{k\to j},\Delta U_{ij},\Delta U_{jk};\Psi)-H_j\bigr\|_F^{2}+\bigl(1-\mathrm{SSIM}\bigl(F_{T}(F_{i\to j},B_{k\to j},\Delta U_{ij},\Delta U_{jk};\Psi),\,H_j\bigr)\bigr)$$

to train the time fusion convolutional neural network, where $F_{T}$ denotes the mapping function of the time fusion convolutional neural network and $\Psi$ its training weight parameters; $j$ indexes the time phase in the training set; $H_j$ is the high-resolution image of the $j$-th phase; $F_{i\to j}$ and $B_{k\to j}$ are the forward and backward transition high-resolution images at the $j$-th phase; $\Delta U_{ij}=U_j-U_i$ and $\Delta U_{jk}=U_k-U_j$ are the $i$,$j$ and $j$,$k$ time-phase up-sampled residual images, with $U_i$, $U_j$, $U_k$ the up-sampled images obtained by up-sampling the $i$, $j$, $k$ time-phase low-resolution images; and $\mathrm{SSIM}$ is the image structural similarity function with means $\mu$, standard deviations $\sigma$, covariance $\sigma_{xy}$ and two very small constants $c_1$, $c_2$ preventing the denominator from being 0;
and the high-resolution reconstructed image data output submodule is used for inputting the second time-phase forward transition high-resolution image, the second time-phase backward transition high-resolution image, the first two-time phase up-sampling residual image and the second three-time phase up-sampling residual image into the trained time fusion convolutional neural network to obtain a second time-phase high-resolution reconstructed image.
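The up-sampling submodule's two steps can be sketched as follows; nearest-neighbour replication via np.kron stands in for the Bicubic interpolation named in the text, purely to keep the sketch dependency-free:

```python
import numpy as np

def upsample(img, scale):
    """Nearest-neighbour stand-in for the Bicubic up-sampling; the scale
    matches the sampling ratio of the high-resolution reconstruction."""
    return np.kron(img, np.ones((scale, scale)))

def upsampled_residuals(L1, L2, L3, scale):
    """First two-time phase and second three-time phase up-sampled
    residual images, as the two submodules compute them."""
    U1, U2, U3 = (upsample(L, scale) for L in (L1, L2, L3))
    return U2 - U1, U3 - U2
```

Each residual has the resolution of the reconstructed image, so it can be stacked with the two transition images as a channel of the fusion network's four-channel input.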
According to a third aspect, the invention provides an electronic device comprising: a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement a high resolution image reconstruction method according to the first aspect.
According to a fourth aspect, the present invention provides a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the method for high resolution reconstruction of images according to the first aspect.
The invention has the beneficial effects that:
the invention fully considers the difference of different sensors, learns the error of the two sensors by training the bias characteristic convolution neural network, and improves the high-resolution reconstruction precision of the image. The invention also fully considers that the image to be reconstructed is a time sequence data, and the influence of different time length changes on the reconstructed date image is different, on one hand, the influence degree of the image difference caused by the time change at each pixel point on the reconstructed date image is learned through training the time fusion convolution neural network, and the influence degree is taken as the time fusion weight of the pixel point, so that the weight expression force is stronger and more accurate than that of the traditional method which directly uses the residual image value and uses a linear function to calculate the weight; on the other hand, an SSIM structure similarity function is added in the model loss function besides the classical MSE loss function, the closer the SSIM value is to 1, the higher the similarity of the two image structures is, the less the least square loss of the optimized model is ensured, the better structure similarity with the real picture is kept, and the better visual effect is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Wherein:
FIG. 1 is a flow chart of a method for high resolution image reconstruction according to an embodiment;
FIG. 2 is a block diagram of a logical framework of a high resolution image reconstruction method according to an embodiment;
FIG. 3 is a flow diagram of obtaining a high resolution reconstructed image through a time-fused convolutional neural network in one embodiment;
FIG. 4 is a diagram illustrating an actual predicted effect of a high resolution image reconstruction method according to an embodiment;
fig. 5 is a diagram illustrating an architecture of an image high resolution reconstruction device according to another embodiment.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is noted that the terms "comprises," "comprising," and "having" and any variations thereof in the description and claims of this application and the drawings described above are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. In the claims, the description and the drawings of the specification of the present application, relational terms such as "first" and "second", and the like, may be used solely to distinguish one entity/action/object from another entity/action/object without necessarily requiring or implying any actual such relationship or order between such entities/actions/objects.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
Fig. 1 is a flowchart of a high resolution image reconstruction method according to an embodiment of the present disclosure. Fig. 2 is a schematic diagram of a logical framework of a high resolution image reconstruction method according to an embodiment of the present disclosure.
Referring to fig. 1 and fig. 2, in an embodiment, a method for reconstructing an image with high resolution is provided, including:
step 100, obtaining a first time phase low resolution imageSecond time-phase low-resolution imageAnd a third time-phase low-resolution imageThe first time phase, the second time phase and the third time phase are arranged in sequence according to a time sequence, wherein the first time phase is a low-resolution imageFirst time phase high resolution image with same time phase in same paired regionThird time-phase low-resolution imageThird time-phase high-resolution image with same time phase of same region and matched pair。
It should be noted that, as an example, in the embodiment of the present invention, remote sensing images from two sensors covering a specific planting-structure area are obtained first: the first sensor provides a high-resolution image sequence, and the second sensor provides the matching low-resolution image sequence. The paired high- and low-resolution image sequences form a training sample set, which is used to train the neural networks designed in the present invention.
In addition to the low-resolution images that have paired high-resolution images, the low-resolution image sequence still contains a large number of low-resolution images without a paired high-resolution image, and the method of the present invention reconstructs paired high-resolution images for them. For clarity and brevity, the following embodiments use only three low-resolution images of different time phases and the two paired high-resolution images to describe the high-resolution image reconstruction method based on sensor error correction and spatio-temporal data fusion, reconstructing the high-spatial-resolution image paired with the second time-phase low-resolution image. By analogy, a paired high-spatial-resolution image can be reconstructed for every unpaired image in the low-resolution image sequence, so that an image sequence with both high temporal and high spatial resolution can be obtained. For consistency of notation, the same parameter symbols are used for the training set in the training phase of each neural network and in the prediction, recognition and reconstruction steps based on the trained networks; this is a convenient shorthand for a person skilled in the art.
Step 200, inputting the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-time-phase transition residual image, and inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-time-phase transition residual image.
In one embodiment, step 200 further includes:
step 201, training the super-resolution convolutional neural network using the loss function

L(θ1) = (1 / (W1 · H1)) · ‖F1(ΔLij; θ1) − ΔHij‖²

wherein F1 denotes the mapping function of the super-resolution convolutional neural network, θ1 its training weight parameters, ΔHij the difference between the j-th time-phase high-resolution image and the i-th time-phase high-resolution image of the training set, ΔLij the difference between the j-th time-phase low-resolution image and the i-th time-phase low-resolution image of the training set, ‖·‖ the Euclidean norm, H1 the resolution of the difference images along the image length, and W1 their resolution along the image width;
step 202, calculating the image difference between the first time-phase low-resolution image and the second time-phase low-resolution image and inputting it into the trained super-resolution convolutional neural network, whose weight parameters have been fixed by training, to obtain a first two-time-phase transition residual image; calculating the image difference between the second time-phase low-resolution image and the third time-phase low-resolution image and inputting it into the trained super-resolution convolutional neural network to obtain a second three-time-phase transition residual image.
It should be noted that the Super-Resolution Convolutional Neural Network (SRCNN) adopted in this embodiment belongs to the prior art and is described in detail in Dong, C., et al.
Step 300, inputting the first two-time-phase transition residual image into a bias feature extraction convolutional neural network to obtain a first two-time-phase sensor deviation, and inputting the second three-time-phase transition residual image into the bias feature extraction convolutional neural network to obtain a second three-time-phase sensor deviation.
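As a hedged illustration only (not the patent's own code), the SRCNN-style three-stage forward pass — feature extraction, nonlinear mapping, reconstruction — applied to a low-resolution difference image can be sketched in plain NumPy; layer widths, kernel sizes and all names below are illustrative assumptions:

```python
import numpy as np

def conv2d(x, w, b):
    """Same-padding 2-D convolution. x: (C_in, H, W), w: (C_out, C_in, k, k), b: (C_out,)."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    height, width = x.shape[1:]
    out = np.zeros((c_out, height, width))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(k):
                for dj in range(k):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + height, dj:dj + width]
        out[o] += b[o]
    return out

def srcnn_forward(x, params):
    """Three convolutional layers: feature extraction, nonlinear mapping, reconstruction."""
    (w1, b1), (w2, b2), (w3, b3) = params
    h = np.maximum(conv2d(x, w1, b1), 0.0)   # feature extraction + ReLU
    h = np.maximum(conv2d(h, w2, b2), 0.0)   # nonlinear mapping + ReLU
    return conv2d(h, w3, b3)                 # reconstruction (linear)

rng = np.random.default_rng(0)
params = [
    (rng.normal(0, 0.1, (8, 1, 3, 3)), np.zeros(8)),   # assumed 8 feature maps
    (rng.normal(0, 0.1, (8, 8, 1, 1)), np.zeros(8)),   # 1x1 nonlinear mapping
    (rng.normal(0, 0.1, (1, 8, 3, 3)), np.zeros(1)),
]
diff = rng.normal(size=(1, 16, 16))          # a low-resolution difference image
residual = srcnn_forward(diff, params)
print(residual.shape)                        # (1, 16, 16)
```

In the method above, such a network is trained so that its output approximates the corresponding high-resolution difference image (the transition residual).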
In one embodiment, step 300 further includes:
step 301, training the bias feature extraction convolutional neural network using the loss function

L(θ2) = ‖G(Rij; θ2) − (ΔHij − Rij)‖²

wherein G denotes the mapping function of the bias feature extraction convolutional neural network, θ2 its training weight parameters, ΔHij the difference between the j-th time-phase high-resolution image and the i-th time-phase high-resolution image of the training set, and Rij the i, j time-phase transition residual image;
step 302, inputting the first two-time-phase transition residual image into the trained bias feature extraction convolutional neural network, whose weight parameters have been fixed by training, to obtain a first two-time-phase sensor deviation; inputting the second three-time-phase transition residual image into the trained bias feature extraction convolutional neural network to obtain a second three-time-phase sensor deviation.
It should be noted that, during the training of the bias feature extraction convolutional neural network, the transition residual images are computed with the trained super-resolution convolutional neural network, as in steps 200 and 202.
In one embodiment, the bias feature extraction convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer; the three hidden layers correspond to a feature extraction operation, a nonlinear mapping operation and a reconstruction operation, respectively.
Step 400, adding the first time-phase high-resolution image, the first two-time-phase transition residual image and the first two-time-phase sensor deviation to obtain a second time-phase forward transition high-resolution image; adding the third time-phase high-resolution image, the second three-time-phase transition residual image and the second three-time-phase sensor deviation to obtain a second time-phase backward transition high-resolution image.
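The composition in step 400 is a plain element-wise sum of three images; a minimal NumPy sketch (the array names and magnitudes are illustrative, not the patent's notation):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)

h1 = rng.random(shape)              # first time-phase high-resolution image
r12 = rng.normal(0, 0.05, shape)    # first two-time-phase transition residual image
d12 = rng.normal(0, 0.01, shape)    # first two-time-phase sensor deviation

# Second time-phase forward transition high-resolution image:
h2_forward = h1 + r12 + d12

h3 = rng.random(shape)              # third time-phase high-resolution image
r23 = rng.normal(0, 0.05, shape)    # second three-time-phase transition residual image
d23 = rng.normal(0, 0.01, shape)    # second three-time-phase sensor deviation

# Second time-phase backward transition high-resolution image:
h2_backward = h3 + r23 + d23
print(h2_forward.shape, h2_backward.shape)
```

The two transition images are then reconciled by the time-fusion network of step 500.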
Step 500, inputting the second time-phase forward transition high-resolution image and the second time-phase backward transition high-resolution image into a time-fusion convolutional neural network to obtain a second time-phase high-resolution reconstructed image.
Referring to fig. 3, in an embodiment, step 500 further includes:
step 501, up-sampling the first time-phase low-resolution image, the second time-phase low-resolution image and the third time-phase low-resolution image using the Bicubic interpolation method to obtain a first time-phase up-sampled image, a second time-phase up-sampled image and a third time-phase up-sampled image, wherein the up-sampling ratio is the same as the sampling ratio of the image after high-resolution reconstruction;
step 502, subtracting the first time-phase up-sampled image from the second time-phase up-sampled image to obtain a first two-time-phase up-sampling residual image, and subtracting the second time-phase up-sampled image from the third time-phase up-sampled image to obtain a second three-time-phase up-sampling residual image;
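Steps 501-502 can be sketched with `scipy.ndimage.zoom`, whose `order=3` cubic-spline interpolation stands in for the Bicubic method named above; the ×4 scale factor and all variable names are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

scale = 4                                   # assumed up-sampling ratio
rng = np.random.default_rng(2)
low1, low2, low3 = (rng.random((16, 16)) for _ in range(3))

# Step 501: up-sample each low-resolution image to the reconstruction resolution.
up1, up2, up3 = (zoom(img, scale, order=3) for img in (low1, low2, low3))

# Step 502: difference the up-sampled images to form the residual inputs.
res12 = up2 - up1                           # first two-time-phase up-sampling residual
res23 = up3 - up2                           # second three-time-phase up-sampling residual
print(up1.shape, res12.shape)               # (64, 64) (64, 64)
```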
step 503, training the time-fusion convolutional neural network using a loss function that penalizes the discrepancy between the network output and the j-th time-phase high-resolution image of the training set together with an image structural similarity term, wherein Ft denotes the mapping function of the time-fusion convolutional neural network, θt its training weight parameters, j the j-th time phase in the training set, Hj the j-th time-phase high-resolution image, Fj the forward transition high-resolution image from the i-th time phase at the j-th time phase, Bj the backward transition high-resolution image from the k-th time phase at the j-th time phase, ΔUij the i, j time-phase up-sampling residual image, ΔUjk the j, k time-phase up-sampling residual image, and Ui, Uj, Uk the i, j, k time-phase up-sampled images obtained by up-sampling the i, j, k time-phase low-resolution images; the image structural similarity function is

SSIM(x, y) = ((2·μx·μy + c1)·(2·σxy + c2)) / ((μx² + μy² + c1)·(σx² + σy² + c2))

wherein μx, μy are the means of all elements of the images x and y, σx, σy the standard deviations of all their elements, σxy the covariance of the elements of x and y, and c1, c2 two very small constants that prevent the denominator from being 0;
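The structural similarity term in this loss is the standard global SSIM built from the means, standard deviations and covariance described above; a NumPy sketch (the constant values `c1`, `c2` are illustrative):

```python
import numpy as np

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global structural similarity between two images of equal shape."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # squared standard deviations
    cov = ((x - mx) * (y - my)).mean()         # covariance of the elements
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

img = np.random.default_rng(3).random((32, 32))
print(round(ssim(img, img), 6))                # 1.0 for identical images
```

The small constants keep the denominator away from 0 for flat or dark images, exactly the role the specification assigns to them.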
step 504, inputting the second time-phase forward transition high-resolution image, the second time-phase backward transition high-resolution image, the first two-time-phase up-sampling residual image and the second three-time-phase up-sampling residual image into the trained time-fusion convolutional neural network to obtain a second time-phase high-resolution reconstructed image.
It should be noted that, during the training of the time-fusion convolutional neural network, the forward and backward transition high-resolution images are calculated according to step 400.
It should be noted that the above description of the high-resolution image reconstruction method uses only the first, second and third time phases; in the training of the three convolutional neural networks, however, any three images in the image set may be combined into a three-time-phase combination, with i, j and k corresponding to the first, second and third time phases, respectively.
In one embodiment, a time-fused convolutional neural network comprises an input layer, three convolutional hidden layers, and an output layer;
the input layer is used for stacking the first two-time-phase up-sampling residual image, the second three-time-phase up-sampling residual image, the second time-phase forward transition high-resolution image and the second time-phase backward transition high-resolution image together to form the four-channel input data of the model;
the three hidden layers respectively correspond to the feature extraction operation, the nonlinear mapping operation and the weight extraction operation, and the hidden layer corresponding to weight extraction outputs a weight tensor W whose dimensions equal the length and width resolution of the high-resolution image;
the output layer has the output formula

H2 = W ⊙ F2 + (1 − W) ⊙ B2

wherein H2 is the second time-phase high-resolution reconstructed image, F2 the second time-phase forward transition high-resolution image, B2 the second time-phase backward transition high-resolution image, 1 a tensor whose elements are all 1, W the weight tensor output by the previous layer, and "⊙" denotes multiplication of elements at corresponding positions.
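The output-layer combination is a per-pixel convex blend of the forward and backward transition images by the learned weight tensor; a minimal sketch (all names illustrative):

```python
import numpy as np

def fuse(forward_img, backward_img, weight):
    """Element-wise blend: weight * forward + (1 - weight) * backward."""
    assert forward_img.shape == backward_img.shape == weight.shape
    return weight * forward_img + (1.0 - weight) * backward_img

rng = np.random.default_rng(4)
f2 = rng.random((64, 64))       # second time-phase forward transition image
b2 = rng.random((64, 64))       # second time-phase backward transition image
w = rng.random((64, 64))        # weight tensor from the weight-extraction layer

h2 = fuse(f2, b2, w)
# With weight 1 everywhere the output is exactly the forward image:
print(np.allclose(fuse(f2, b2, np.ones_like(w)), f2))   # True
```

A per-pixel weight lets the network trust the forward transition where the scene changed little between the first and second time phases, and the backward transition elsewhere.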
Referring to FIG. 4, the second time-phase high-resolution reconstructed image is obtained from the low-resolution image sequence and the high-resolution image sequence acquired by two different sensors and is paired with the second time-phase low-resolution image; it has the same resolution as the high-resolution image sequence while preserving the change information of the low-resolution time-series images over time. By analogy, a paired high-spatial-resolution image can be reconstructed for every unpaired image in the low-resolution image sequence, so that an image sequence with high temporal and high spatial resolution can be obtained.
Referring to fig. 5, in an embodiment, an apparatus for reconstructing a high resolution image is provided, including:
an image acquisition module, configured to acquire a first time-phase low-resolution image, a second time-phase low-resolution image and a third time-phase low-resolution image, wherein the first time phase, the second time phase and the third time phase are arranged in chronological order, the first time-phase low-resolution image has a paired first time-phase high-resolution image of the same region and the same time phase, and the third time-phase low-resolution image has a paired third time-phase high-resolution image of the same region and the same time phase;
a transition residual image data processing module, configured to input the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-time-phase transition residual image, and to input the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-time-phase transition residual image;
a sensor deviation data processing module, configured to input the first two-time-phase transition residual image into a bias feature extraction convolutional neural network to obtain a first two-time-phase sensor deviation, and to input the second three-time-phase transition residual image into the bias feature extraction convolutional neural network to obtain a second three-time-phase sensor deviation;
a transitional high-resolution image data processing module, configured to add the first time-phase high-resolution image, the first two-time-phase transition residual image and the first two-time-phase sensor deviation to obtain a second time-phase forward transition high-resolution image, and to add the third time-phase high-resolution image, the second three-time-phase transition residual image and the second three-time-phase sensor deviation to obtain a second time-phase backward transition high-resolution image;
a high-resolution reconstructed image data processing module, configured to input the second time-phase forward transition high-resolution image and the second time-phase backward transition high-resolution image into a time-fusion convolutional neural network to obtain a second time-phase high-resolution reconstructed image.
In one embodiment, the transition residual image data processing module is configured to implement:
training the super-resolution convolutional neural network using the loss function

L(θ1) = (1 / (W1 · H1)) · ‖F1(ΔLij; θ1) − ΔHij‖²

wherein F1 denotes the mapping function of the super-resolution convolutional neural network, θ1 its training weight parameters, ΔHij the difference between the j-th time-phase high-resolution image and the i-th time-phase high-resolution image of the training set, ΔLij the difference between the j-th time-phase low-resolution image and the i-th time-phase low-resolution image of the training set, ‖·‖ the Euclidean norm, H1 the resolution of the difference images along the image length, and W1 their resolution along the image width;
calculating the image difference between the first time-phase low-resolution image and the second time-phase low-resolution image and inputting it into the trained super-resolution convolutional neural network to obtain a first two-time-phase transition residual image; calculating the image difference between the second time-phase low-resolution image and the third time-phase low-resolution image and inputting it into the trained super-resolution convolutional neural network to obtain a second three-time-phase transition residual image.
In one embodiment, the sensor deviation data processing module is configured to implement:
training the bias feature extraction convolutional neural network using the loss function

L(θ2) = ‖G(Rij; θ2) − (ΔHij − Rij)‖²

wherein G denotes the mapping function of the bias feature extraction convolutional neural network, θ2 its training weight parameters, ΔHij the difference between the j-th time-phase high-resolution image and the i-th time-phase high-resolution image of the training set, and Rij the i, j time-phase transition residual image;
inputting the first two-time-phase transition residual image into the trained bias feature extraction convolutional neural network to obtain a first two-time-phase sensor deviation, and inputting the second three-time-phase transition residual image into the trained bias feature extraction convolutional neural network to obtain a second three-time-phase sensor deviation.
It should be noted that, in the training of the bias feature extraction convolutional neural network, the transition residual images are calculated from the training-set data using the trained super-resolution convolutional neural network.
In one embodiment, the bias feature extraction convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer; the three hidden layers correspond to a feature extraction operation, a nonlinear mapping operation and a reconstruction operation, respectively.
In one embodiment, the high resolution reconstructed image data processing module comprises:
an up-sampled image data processing sub-module, configured to up-sample the first time-phase low-resolution image, the second time-phase low-resolution image and the third time-phase low-resolution image by Bicubic interpolation to obtain a first time-phase up-sampled image, a second time-phase up-sampled image and a third time-phase up-sampled image, wherein the up-sampling ratio is the same as the sampling ratio of the image after high-resolution reconstruction;
an up-sampling residual image data processing sub-module, configured to subtract the first time-phase up-sampled image from the second time-phase up-sampled image to obtain a first two-time-phase up-sampling residual image, and to subtract the second time-phase up-sampled image from the third time-phase up-sampled image to obtain a second three-time-phase up-sampling residual image;
a time-fusion convolutional neural network training sub-module, configured to train the time-fusion convolutional neural network using a loss function that penalizes the discrepancy between the network output and the j-th time-phase high-resolution image of the training set together with an image structural similarity term, wherein Ft denotes the mapping function of the time-fusion convolutional neural network, θt its training weight parameters, j the j-th time phase in the training set, Hj the j-th time-phase high-resolution image, Fj the forward transition high-resolution image from the i-th time phase at the j-th time phase, Bj the backward transition high-resolution image from the k-th time phase at the j-th time phase, ΔUij the i, j time-phase up-sampling residual image, ΔUjk the j, k time-phase up-sampling residual image, and Ui, Uj, Uk the i, j, k time-phase up-sampled images obtained by up-sampling the i, j, k time-phase low-resolution images; the image structural similarity function is

SSIM(x, y) = ((2·μx·μy + c1)·(2·σxy + c2)) / ((μx² + μy² + c1)·(σx² + σy² + c2))

wherein μx, μy are the means of all elements of the images x and y, σx, σy the standard deviations of all their elements, σxy the covariance of the elements of x and y, and c1, c2 two very small constants that prevent the denominator from being 0;
a high-resolution reconstructed image data output sub-module, configured to input the second time-phase forward transition high-resolution image, the second time-phase backward transition high-resolution image, the first two-time-phase up-sampling residual image and the second three-time-phase up-sampling residual image into the trained time-fusion convolutional neural network to obtain a second time-phase high-resolution reconstructed image.
It should be noted that, in the training of the time-fusion convolutional neural network, the forward and backward transition high-resolution images are calculated from the training-set data by the transitional high-resolution image data processing module.
In the above description of the high-resolution image reconstruction apparatus, only the first, second and third time phases are used; in the training of the three convolutional neural networks, however, any three images in the image set may be combined into a three-time-phase combination, with i, j and k corresponding to the first, second and third time phases, respectively.
In one embodiment, the time-fusion convolutional neural network includes an input layer, three convolutional hidden layers, and an output layer;
the input layer is used for stacking the first two-time-phase up-sampling residual image, the second three-time-phase up-sampling residual image, the second time-phase forward transition high-resolution image and the second time-phase backward transition high-resolution image together to form the four-channel input data of the model;
the three hidden layers respectively correspond to the feature extraction operation, the nonlinear mapping operation and the weight extraction operation, and the hidden layer corresponding to weight extraction outputs a weight tensor W whose dimensions equal the length and width resolution of the high-resolution image;
the output layer has the output formula

H2 = W ⊙ F2 + (1 − W) ⊙ B2

wherein H2 is the second time-phase high-resolution reconstructed image, F2 the second time-phase forward transition high-resolution image, B2 the second time-phase backward transition high-resolution image, 1 a tensor whose elements are all 1, W the weight tensor output by the previous layer, and "⊙" denotes multiplication of elements at corresponding positions.
In an embodiment, an electronic device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the high-resolution image reconstruction method of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, which stores a computer program, and when the computer program is executed by a processor, the processor executes the steps of the image high resolution reconstruction method in the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a non-volatile computer readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), rambus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (10)
1. A method for reconstructing an image with high resolution is characterized by comprising the following steps:
acquiring a first time phase low-resolution image, a second time phase low-resolution image and a third time phase low-resolution image, wherein the first time phase, the second time phase and the third time phase are sequentially arranged according to a time sequence, the first time phase low-resolution image is provided with a first time phase high-resolution image which is paired in the same region and the same time phase, and the third time phase low-resolution image is provided with a third time phase high-resolution image which is paired in the same region and the same time phase;
inputting the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-time-phase transition residual image, and inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-time-phase transition residual image;
inputting the first two-time phase transition residual image into a bias characteristic extraction convolutional neural network to obtain a first two-time phase sensor deviation, and inputting the second three-time phase transition residual image into a bias characteristic extraction convolutional neural network to obtain a second three-time phase sensor deviation;
adding the first time phase high-resolution image, the first two-time phase transition residual image and the first two-time phase sensor deviation to obtain a second time phase forward transition high-resolution image, and adding the third time phase high-resolution image, the second three-time phase transition residual image and the second three-time phase sensor deviation to obtain a second time phase backward transition high-resolution image;
and inputting the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image into a time fusion convolution neural network to obtain a second time phase high-resolution reconstruction image.
2. The method according to claim 1, wherein the inputting the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-phase transition residual image, and the inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-phase transition residual image specifically comprises:
using the loss function

L(θ1) = (1 / (W1 · H1)) · ‖F1(ΔLij; θ1) − ΔHij‖²

to train the super-resolution convolutional neural network, wherein F1 denotes the mapping function of the super-resolution convolutional neural network, θ1 its training weight parameters, ΔHij the difference between the j-th time-phase high-resolution image and the i-th time-phase high-resolution image of the training set, ΔLij the difference between the j-th time-phase low-resolution image and the i-th time-phase low-resolution image of the training set, ‖·‖ the Euclidean norm, H1 the resolution of the difference images along the image length, and W1 their resolution along the image width;
calculating the image difference between the first time-phase low-resolution image and the second time-phase low-resolution image, and inputting the image difference into a trained super-resolution convolutional neural network to obtain a first two-time-phase transition residual image;
and calculating the image difference between the second time-phase low-resolution image and the third time-phase low-resolution image, and inputting the image difference into the trained super-resolution convolutional neural network to obtain a second three-time-phase transition residual image.
3. The method as claimed in claim 1, wherein inputting the first two-time phase transition residual image into the bias feature extraction convolutional neural network to obtain the first two-time phase sensor deviation, and inputting the second three-time phase transition residual image into the bias feature extraction convolutional neural network to obtain the second three-time phase sensor deviation, specifically comprises:
using the loss function

L(θ2) = ‖G(Rij; θ2) − (ΔHij − Rij)‖²

to train the bias feature extraction convolutional neural network, wherein G denotes the mapping function of the bias feature extraction convolutional neural network, θ2 its training weight parameters, ΔHij the difference between the j-th time-phase high-resolution image and the i-th time-phase high-resolution image of the training set, and Rij the i, j time-phase transition residual image;
inputting the first two-time phase transition residual error image into the trained bias feature extraction convolutional neural network to obtain a first two-time phase sensor deviation;
and inputting the second three-time phase transition residual image into the trained bias feature extraction convolutional neural network to obtain a second three-time phase sensor deviation.
4. The method of claim 3, wherein the bias feature extraction convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer, and the three hidden layers correspond to the feature extraction operation, the nonlinear mapping operation and the reconstruction operation, respectively.
5. The method as claimed in claim 1, wherein the inputting the second-phase forward transition high-resolution image and the second-phase backward transition high-resolution image into a time fusion convolutional neural network to obtain the second-phase high-resolution reconstructed image specifically includes:
using a Bicubic interpolation method to perform upsampling on the first time phase low-resolution image, the second time phase low-resolution image and the third time phase low-resolution image to obtain a first time phase upsampled image, a second time phase upsampled image and a third time phase upsampled image, wherein the upsampling proportion is the same as the sampling proportion of the image after high-resolution reconstruction;
subtracting the first time phase up-sampled image from the second time phase up-sampled image to obtain a first two-time phase up-sampled residual image, and subtracting the second time phase up-sampled image from the third time phase up-sampled image to obtain a second three-time phase up-sampled residual image;
training the time fusion convolutional neural network using a loss function that penalizes the discrepancy between the network output and the j-th time-phase high-resolution image of the training set together with an image structural similarity term, wherein Ft denotes the mapping function of the time fusion convolutional neural network, θt its training weight parameters, j the j-th time phase in the training set, Hj the j-th time-phase high-resolution image, Fj the forward transition high-resolution image from the i-th time phase at the j-th time phase, Bj the backward transition high-resolution image from the k-th time phase at the j-th time phase, ΔUij the i, j time-phase up-sampling residual image, ΔUjk the j, k time-phase up-sampling residual image, and Ui, Uj, Uk the i, j, k time-phase up-sampled images obtained by up-sampling the i, j, k time-phase low-resolution images; the image structural similarity function is

SSIM(x, y) = ((2·μx·μy + c1)·(2·σxy + c2)) / ((μx² + μy² + c1)·(σx² + σy² + c2))

wherein μx, μy are the means of all elements of the images x and y, σx, σy the standard deviations of all their elements, σxy the covariance of the elements of x and y, and c1, c2 two very small constants that prevent the denominator from being 0;
and inputting the second time phase forward transition high-resolution image, the second time phase backward transition high-resolution image, the first two-time phase up-sampling residual image and the second three-time phase up-sampling residual image into the trained time fusion convolutional neural network to obtain a second time phase high-resolution reconstructed image.
6. The method according to claim 5, wherein the time fusion convolutional neural network comprises an input layer, three convolutional hidden layers and an output layer;
the input layer is used for stacking the first two-time phase up-sampling residual image, the second three-time phase up-sampling residual image, the second time phase forward transition high-resolution image and the second time phase backward transition high-resolution image together to form a four-channel data input model;
the three hidden layers respectively correspond to the feature extraction operation, the nonlinear mapping operation and the weight extraction operation, and the hidden layer corresponding to the weight extraction operation outputs an $M \times N$ tensor $W$, in which $M$ and $N$ are the length and width resolution of the high-resolution image;
the output formula of the output layer is

$$\hat{H}_{2} = W \circ H^{f}_{2} + \left( \mathbf{1} - W \right) \circ H^{b}_{2}$$

in the formula, $\hat{H}_{2}$ is the second time phase high-resolution reconstructed image, $H^{f}_{2}$ is the second time phase forward transition high-resolution image, $H^{b}_{2}$ is the second time phase backward transition high-resolution image, $\mathbf{1}$ represents an $M \times N$ tensor with all elements being 1, $W$ is the $M \times N$ tensor output by the previous layer, and "$\circ$" represents multiplication of elements at corresponding positions.
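The element-wise fusion performed by the output layer of claim 6 can be sketched as follows. This is illustrative only; the function name `fuse` and the array shapes are assumptions, and the weight tensor is taken to lie in [0, 1] as produced by the weight-extraction layer.

```python
import numpy as np

def fuse(forward_hr, backward_hr, weight):
    # Output layer of the time-fused network: each pixel of the reconstruction
    # is a convex combination of the forward and backward transition images,
    # weighted per element by the M x N tensor from the weight-extraction layer.
    return weight * forward_hr + (1.0 - weight) * backward_hr
```

When the weight tensor is all ones the output equals the forward transition image; all zeros yields the backward transition image; intermediate values blend the two per pixel.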
7. An apparatus for high resolution reconstruction of an image, comprising:
the image acquisition module is used for acquiring a first time phase low-resolution image, a second time phase low-resolution image and a third time phase low-resolution image, wherein the first time phase, the second time phase and the third time phase are sequentially arranged according to a time sequence, the first time phase low-resolution image is provided with a first time phase high-resolution image which is paired in the same region and the same time phase, and the third time phase low-resolution image is provided with a third time phase high-resolution image which is paired in the same region and the same time phase;
the transition residual image data processing module is used for inputting the first time-phase low-resolution image and the second time-phase low-resolution image into a super-resolution convolutional neural network to obtain a first two-time-phase transition residual image, and inputting the second time-phase low-resolution image and the third time-phase low-resolution image into the super-resolution convolutional neural network to obtain a second three-time-phase transition residual image;
the sensor deviation data processing module is used for inputting the first two-time phase transition residual error image into a bias feature extraction convolutional neural network to obtain a first two-time phase sensor deviation, and inputting the second three-time phase transition residual error image into the bias feature extraction convolutional neural network to obtain a second three-time phase sensor deviation;
the transition high-resolution image data processing module is used for adding the first time-phase high-resolution image, the first two-time phase transition residual image and the first two-time phase sensor deviation to obtain a second time-phase forward transition high-resolution image, and adding the third time-phase high-resolution image, the second three-time phase transition residual image and the second three-time phase sensor deviation to obtain a second time-phase backward transition high-resolution image;
and the high-resolution reconstruction image data processing module is used for inputting the second time-phase forward transition high-resolution image and the second time-phase backward transition high-resolution image into a time fusion convolution neural network to obtain a second time-phase high-resolution reconstruction image.
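The data flow through the modules of claim 7 can be sketched as a single function. This is an illustrative sketch only: the three networks are passed in as callables and stand in for the trained super-resolution, bias-feature-extraction and time-fusion convolutional neural networks; all names are assumptions, and the residual and deviation images are assumed to already be at the high-resolution grid so they can be added to the high-resolution images as the claim describes.

```python
import numpy as np

def reconstruct_second_phase(hr1, hr3, lr1, lr2, lr3,
                             sr_cnn, bias_cnn, fusion_cnn):
    # Data flow of the apparatus of claim 7; the callables are placeholders,
    # not the real trained models.
    res_12 = sr_cnn(lr1, lr2)            # first two-time phase transition residual
    res_23 = sr_cnn(lr2, lr3)            # second three-time phase transition residual
    dev_12 = bias_cnn(res_12)            # first two-time phase sensor deviation
    dev_23 = bias_cnn(res_23)            # second three-time phase sensor deviation
    forward_hr = hr1 + res_12 + dev_12   # second phase forward transition image
    backward_hr = hr3 + res_23 + dev_23  # second phase backward transition image
    return fusion_cnn(forward_hr, backward_hr)
```

With trivial stand-in networks (zero residuals and deviations, averaging fusion) the sketch simply averages the two bracketing high-resolution phases, which shows the wiring rather than the learned behaviour.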
8. The apparatus for high-resolution reconstruction of images according to claim 7, wherein said high-resolution reconstructed image data processing module comprises:
the up-sampling image data processing submodule is used for up-sampling the first time phase low-resolution image, the second time phase low-resolution image and the third time phase low-resolution image by using a bicubic interpolation method to obtain a first time phase up-sampled image, a second time phase up-sampled image and a third time phase up-sampled image, wherein the up-sampling ratio is the same as the scaling ratio of the image after high-resolution reconstruction;
the up-sampling residual image data processing sub-module is used for subtracting the first time phase up-sampling image from the second time phase up-sampling image to obtain a first two-time phase up-sampling residual image, and subtracting the second time phase up-sampling image from the third time phase up-sampling image to obtain a second three-time phase up-sampling residual image;
a time-fused convolutional neural network training submodule for training the time-fused convolutional neural network using the loss function

$$L(\theta) = \sum_{j \in T} \left[ 1 - \mathrm{SSIM}\left( F_{\theta}\left( H^{f}_{i \to j}, H^{b}_{k \to j}, R_{i,j}, R_{j,k} \right), H_{j} \right) \right]$$

wherein $F_{\theta}$ represents the mapping function of the time-fused convolutional neural network, $\theta$ represents the training weight parameter of the mapping function, $j$ represents the $j$-th time phase in the training set $T$, $H_{j}$ is the $j$-th phase high-resolution image, $H^{f}_{i \to j}$ is the forward transition high-resolution image from the $i$-th phase at the $j$-th phase, $H^{b}_{k \to j}$ is the backward transition high-resolution image from the $k$-th phase at the $j$-th phase, $R_{i,j}$ is the $i$, $j$-th phase up-sampling residual image and $R_{j,k}$ is the $j$, $k$-th phase up-sampling residual image, computed from the up-sampled images obtained by up-sampling the $i$-th, $j$-th and $k$-th phase low-resolution images, and $\mathrm{SSIM}$ is the image structural similarity function

$$\mathrm{SSIM}(x, y) = \frac{\left( 2\mu_{x}\mu_{y} + c_{1} \right)\left( 2\sigma_{xy} + c_{2} \right)}{\left( \mu_{x}^{2} + \mu_{y}^{2} + c_{1} \right)\left( \sigma_{x}^{2} + \sigma_{y}^{2} + c_{2} \right)}$$

wherein $\mu_{x}$ and $\mu_{y}$ respectively represent the average value of all the elements in images $x$ and $y$, $\sigma_{x}$ and $\sigma_{y}$ respectively represent the standard deviation of all the elements in $x$ and $y$, $\sigma_{xy}$ represents the covariance of the elements in $x$ and $y$, and $c_{1}$, $c_{2}$ are two very small constants preventing the denominator from being 0;
and the high-resolution reconstructed image data output submodule is used for inputting the second time phase forward transition high-resolution image, the second time phase backward transition high-resolution image, the first two-time phase up-sampling residual image and the second three-time phase up-sampling residual image into the trained time-fused convolutional neural network to obtain a second time phase high-resolution reconstructed image.
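The up-sampling and residual sub-modules of claim 8 can be sketched as follows. This is illustrative only: `scipy.ndimage.zoom` with `order=3` (cubic spline interpolation) stands in for the claimed bicubic interpolation, and the function names are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample(img, scale):
    # Cubic up-sampling (order=3) as a stand-in for bicubic interpolation;
    # `scale` matches the scaling ratio of the high-resolution reconstruction.
    return zoom(img, scale, order=3)

def upsampled_residual(phase_a, phase_b, scale):
    # Up-sample both low-resolution phases to the target resolution,
    # then subtract the earlier phase from the later one.
    return upsample(phase_b, scale) - upsample(phase_a, scale)
```

Applied to the three low-resolution phases with the reconstruction scale, this yields the first two-time phase and second three-time phase up-sampling residual images fed to the time-fused network.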
9. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the high-resolution image reconstruction method of any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, the program being executable by a processor to implement the high-resolution image reconstruction method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211179917.3A CN115272084B (en) | 2022-09-27 | 2022-09-27 | High-resolution image reconstruction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115272084A CN115272084A (en) | 2022-11-01 |
CN115272084B true CN115272084B (en) | 2022-12-16 |
Family
ID=83756820
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109727197A (en) * | 2019-01-03 | 2019-05-07 | Yunnan University | Medical image super-resolution reconstruction method
CN110164148A (en) * | 2019-05-28 | 2019-08-23 | Chengdu University of Information Technology | Intelligent cycle-matching control method and control system for urban road intersection traffic lights
CN111754403A (en) * | 2020-06-15 | 2020-10-09 | Nanjing University of Posts and Telecommunications | Image super-resolution reconstruction method based on residual learning
CN111932457A (en) * | 2020-08-06 | 2020-11-13 | North China University of Technology | High spatio-temporal fusion processing algorithm and device for remote sensing images
CN114841856A (en) * | 2022-03-07 | 2022-08-02 | China University of Mining and Technology | Image super-pixel reconstruction method using a densely connected network based on deep residual channel-spatial attention
Non-Patent Citations (3)
Title |
---|
Image Super-Resolution with Non-Local Sparse Attention; Yiqun Mei et al.; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2021-11-02; 3517-3526 *
Research on a super-resolution image reconstruction method based on an improved residual sub-pixel convolutional neural network; Li Lan et al.; Journal of Changchun Normal University; 2020-08-20 (No. 08); 23-29 *
Remote sensing high spatio-temporal fusion method based on deep learning and super-resolution reconstruction; Zhang Yongmei et al.; Computer Engineering & Science; 2020-09-15 (No. 09); 1578-1586 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110969577B (en) | Video super-resolution reconstruction method based on deep double attention network | |
González-Audícana et al. | A low computational-cost method to fuse IKONOS images using the spectral response function of its sensors | |
Xu et al. | HAM-MFN: Hyperspectral and multispectral image multiscale fusion network with RAP loss | |
Li et al. | Sparse-based reconstruction of missing information in remote sensing images from spectral/temporal complementary information | |
CN110415199B (en) | Multispectral remote sensing image fusion method and device based on residual learning | |
Wang et al. | Enhanced deep blind hyperspectral image fusion | |
CN115511767B (en) | Self-supervised learning multi-modal image fusion method and application thereof | |
Dou et al. | Medical image super-resolution via minimum error regression model selection using random forest | |
Yang et al. | Image super-resolution reconstruction based on improved Dirac residual network | |
Nercessian et al. | Multiresolution decomposition schemes using the parameterized logarithmic image processing model with application to image fusion | |
CN113408540B (en) | Synthetic aperture radar image overlap area extraction method and storage medium | |
Wang et al. | Medical image super-resolution analysis with sparse representation | |
CN115272084B (en) | High-resolution image reconstruction method and device | |
Lei et al. | Convolution neural network with edge structure loss for spatiotemporal remote sensing image fusion | |
CN111950496B (en) | Mask person identity recognition method | |
Yang et al. | Fast multisensor infrared image super-resolution scheme with multiple regression models | |
CN110689510B (en) | Sparse representation-based image fusion method introducing dictionary information | |
Yang et al. | Multi-semi-couple super-resolution method for edge computing | |
CN116563103A (en) | Remote sensing image space-time fusion method based on self-adaptive neural network | |
Cengiz et al. | The Effect of Super Resolution Method on Classification Performance of Satellite Images | |
Wang et al. | Using 250-m MODIS data for enhancing spatiotemporal fusion by sparse representation | |
CN113066030B (en) | Multispectral image panchromatic sharpening method and system based on space-spectrum fusion network | |
CN114926335A (en) | Video super-resolution method and system based on deep learning and computer equipment | |
CN113971763A (en) | Small target segmentation method and device based on target detection and super-resolution reconstruction | |
CN117576483B (en) | Multisource data fusion ground object classification method based on multiscale convolution self-encoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |