CN114972043B - Image super-resolution reconstruction method and system based on combined trilateral feature filtering - Google Patents

Image super-resolution reconstruction method and system based on combined trilateral feature filtering Download PDF

Info

Publication number
CN114972043B
CN114972043B (application CN202210924242.4A)
Authority
CN
China
Prior art keywords
image
resolution
feature
gradient
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210924242.4A
Other languages
Chinese (zh)
Other versions
CN114972043A (en)
Inventor
左一帆
徐雅萍
谢家城
姜文晖
方玉明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi University of Finance and Economics
Original Assignee
Jiangxi University of Finance and Economics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi University of Finance and Economics
Priority to CN202210924242.4A
Publication of CN114972043A
Application granted
Publication of CN114972043B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image super-resolution reconstruction method and system based on combined trilateral feature filtering. The method comprises the following steps: downsampling a high-resolution image to obtain a low-resolution image and constructing a single-image super-resolution reconstruction model; extracting multi-level image features through a feature refinement module in the image reconstruction branch; extracting multi-level gradient features through a gradient refinement module in the gradient prediction branch; based on the image reconstruction branch and the gradient prediction branch, performing fusion guidance through a combined trilateral feature filtering module so as to adaptively adjust the convolution kernels of the target domain, and reconstructing the high-resolution image from coarse to fine; and, when the single-image super-resolution reconstruction model has iterated to convergence, performing forward inference on the model to finally obtain the super-resolution reconstructed image. The invention deeply mines the mutual guidance between the gradient domain and the pixel domain and finally achieves a good reconstruction effect.

Description

Image super-resolution reconstruction method and system based on combined trilateral feature filtering
Technical Field
The invention relates to the technical field of computer image processing, in particular to an image super-resolution reconstruction method and system based on combined trilateral feature filtering.
Background
Image super-resolution reconstruction means that, given a low-resolution image, the corresponding high-resolution image is restored; it belongs to the class of low-level computer vision problems. Image super-resolution reconstruction technology can significantly improve the performance of other high-level vision tasks, such as classification and recognition.
At present, image super-resolution reconstruction technology has made great progress. In particular, with the rise of deep convolutional neural networks, related methods learn the corresponding prior information through end-to-end models characterized by hierarchical features. In early studies in this direction, researchers worked to improve model performance by designing specific network architectures, for example residual learning and dense connections. These methods based on deep convolutional neural networks implicitly learn the mapping function between the low-resolution input and the high-resolution output by means of black-box training. Therefore, the designed models guarantee orderly training while pursuing the high characterization capability of deeper networks.
Although deep networks trained in a black-box manner can significantly improve performance, such models have two drawbacks: (1) the model lacks interpretability; (2) the model lacks generalization, since the parameters in the test phase are fixed and independent of the input content. To address the first drawback, researchers embed priors into the network, such as feature pyramids or the discrete cosine transform. However, this solution is not complete. In particular, the design trend of modern networks requires efficient and effective fusion of multi-scale features in the same domain. Furthermore, cross-domain feature fusion is also necessary under the multi-task learning framework. In these methods, feature fusion is usually achieved by simple channel concatenation followed by a convolutional layer. Although sharing convolution weights across spatial locations with translational invariance can improve feature extraction, this simple and undirected approach still fails to model content-dependent feature fusion. For the second drawback, existing approaches mostly improve generalization capability through attention mechanisms that scale feature representations element-wise in the spatial and channel dimensions. However, this approach does not explicitly model the fusion of image features and gradient features.
Based on this, it is necessary to provide a new image super-resolution reconstruction method to solve the above technical problems.
Disclosure of Invention
In view of the above situation, the main object of the present invention is to provide an image super-resolution reconstruction method based on combined trilateral feature filtering, which is used to solve the above technical problems.
The embodiment of the invention provides an image super-resolution reconstruction method based on combined trilateral feature filtering, wherein the method comprises the following steps:
step one, downsampling a high-resolution image to obtain a low-resolution image, constructing a single-image super-resolution reconstruction model, and inputting the low-resolution image into the single-image super-resolution reconstruction model, wherein the single-image super-resolution reconstruction model comprises an image reconstruction branch and a gradient prediction branch, the image reconstruction branch at least comprises a feature refinement module, and the gradient prediction branch at least comprises a gradient refinement module;
step two, extracting multi-level image features through the feature refinement module in the image reconstruction branch;
step three, extracting multi-level gradient features through the gradient refinement module in the gradient prediction branch;
step four, performing fusion guidance through a combined trilateral feature filtering module based on the image reconstruction branch and the gradient prediction branch, so as to adaptively adjust the convolution kernels of the target domain, and reconstructing the high-resolution image from coarse to fine;
step five, when the single-image super-resolution reconstruction model has iterated to convergence, performing forward inference on the single-image super-resolution reconstruction model to finally obtain the super-resolution reconstructed image.
The invention provides an image super-resolution reconstruction method based on combined trilateral feature filtering. First, the filtering domain of definition is extended from the pixel domain to a high-dimensional feature domain, and a combined trilateral feature filter defined on the image feature domain and the gradient feature domain is simulated by constructing a neural network structure; by sensing the corresponding features on the image feature domain and the gradient feature domain, the combined trilateral feature filtering module adaptively adjusts the convolution kernels of the target-domain features and improves the generalization performance of the feature fusion module.
Second, based on the combined trilateral feature filtering module, the image features and the gradient features alternately serve as guiding-domain features, deeply mining the mutual guidance between the gradient domain and the image domain and realizing bidirectional fusion of the feature domains. Compared with the current state-of-the-art methods, the image super-resolution reconstruction method based on combined trilateral feature filtering achieves the best results in both subjective and objective evaluation.
In the image super-resolution reconstruction method based on combined trilateral feature filtering, a plurality of high-resolution images form a high-resolution image dataset, and the method for processing the high-resolution image dataset comprises the following steps:
dividing the high-resolution image dataset into a training set, a validation set and a test set;
downsampling each high-resolution image in the high-resolution image dataset to generate the corresponding low-resolution image;
and correspondingly cropping the high-resolution image and the low-resolution image into paired sub image blocks of a preset image size, selecting specific sub image blocks, and performing random flipping and rotation for data enhancement, thereby finally obtaining the training set.
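The dataset-preparation steps above (split, downsample, crop paired patches, flip/rotate) can be sketched as follows; the patch size, scale factor, and strided downsampling (a stand-in for bicubic) are illustrative assumptions, not values from the patent.

```python
import numpy as np

def make_training_pair(hr, scale=4, patch=48, seed=0):
    """Produce one aligned (LR patch, HR patch) training pair with random
    flip/rotation data enhancement, as described in the processing steps."""
    rng = np.random.default_rng(seed)
    lr = hr[::scale, ::scale]                    # stand-in for bicubic downsampling
    # Crop corresponding paired sub image blocks (aligned across resolutions).
    y = int(rng.integers(0, lr.shape[0] - patch + 1))
    x = int(rng.integers(0, lr.shape[1] - patch + 1))
    lr_patch = lr[y:y + patch, x:x + patch]
    hr_patch = hr[y * scale:(y + patch) * scale, x * scale:(x + patch) * scale]
    # Data enhancement: identical random flip and 90-degree rotation on both.
    flip = bool(rng.integers(0, 2))
    k = int(rng.integers(0, 4))
    aug = lambda im: np.rot90(np.fliplr(im) if flip else im, k)
    return aug(lr_patch), aug(hr_patch)
```

Applying the same flip and rotation to both patches keeps the pair pixel-aligned, which is what makes the supervision valid.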
The image super-resolution reconstruction method based on combined trilateral feature filtering, wherein the image reconstruction branch comprises a first shallow feature extraction module, a feature refinement module and an image upsampling module which are connected in sequence;
the first shallow feature extraction module consists of two convolution layers and is used for extracting the first shallow image feature $F_{img}^{0}$;
the feature refinement module comprises $N$ sequentially connected residual dense blocks used for extracting the multi-level image features $F_{img}^{i}$, $i=1,\dots,N$, wherein $N$ denotes the maximum number of residual dense blocks and each residual dense block comprises two basic units, and wherein the multi-level image feature $F_{img}^{i}$ extracted by the current residual dense block is updated through bidirectional feature fusion before being input to the next residual dense block;
the image upsampling module takes as input the outputs of the two basic units of the $N$-th residual dense block, concatenated along the channel dimension.
The image super-resolution reconstruction method based on combined trilateral feature filtering, wherein the gradient prediction branch comprises a second shallow feature extraction module, a gradient refinement module and a gradient reconstruction module;
the second shallow feature extraction module consists of two convolution layers and is used for extracting the second shallow image feature $F_{grad}^{0}$;
the gradient refinement module comprises $M$ sequentially connected residual blocks used for extracting the multi-level gradient features $F_{grad}^{j}$, $j=1,\dots,M$, wherein $M$ denotes the maximum number of residual blocks and each residual block comprises two basic units, and wherein the multi-level gradient feature $F_{grad}^{j}$ extracted by the current residual block is updated through bidirectional feature fusion before being input to the next residual block;
the gradient reconstruction module takes the refined multi-level gradient features $F_{grad}^{j}$ as input and then outputs a high-resolution gradient map.
The image super-resolution reconstruction method based on combined trilateral feature filtering, wherein the loss function corresponding to the single-image super-resolution reconstruction model is expressed as:

$$\min_{\theta}\ L(\theta)=\frac{1}{P}\sum_{i=1}^{P}\Big(\big\|H_i-\big(B_i+f(L_i;\theta)\big)\big\|+\lambda\,\big\|G_i^{H}-g(G_i^{L};\theta)\big\|\Big)$$

wherein $L(\theta)$ denotes the loss value corresponding to the loss function, $\min_{\theta}$ denotes the operation of taking the minimum value of the loss function over the model parameters $\theta$, $P$ denotes the maximum number of training images, $i$ denotes the serial number of a training image, $H_i$ denotes the high-resolution original image of the $i$-th training image, $L_i$ denotes the low-resolution original image of the $i$-th training image, $B_i$ denotes the bicubic interpolation result corresponding to the $i$-th training image, $f(L_i;\theta)$ denotes the model output generated from the low-resolution image of the $i$-th training image under parameters $\theta$, $\lambda$ denotes the balance weight, $G_i^{H}$ denotes the high-resolution gradient map of the $i$-th training image, $G_i^{L}$ denotes the low-resolution gradient map of the $i$-th training image, and $g(G_i^{L};\theta)$ denotes the model output generated for the high-resolution gradient map of the $i$-th training image under parameters $\theta$.
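A minimal numpy sketch of this two-term loss, assuming an L1 norm and treating the two model outputs `f_out` (predicted image residual) and `g_out` (predicted high-resolution gradient map) as given arrays; `lam` stands for the balance weight.

```python
import numpy as np

def joint_loss(H, B, f_out, G_hr, g_out, lam=0.1):
    """Image term: L1 error between the HR original and the bicubic result
    plus the predicted residual. Gradient term: weighted L1 error between the
    HR gradient map and its prediction. The L1 choice is an assumption."""
    image_term = np.abs(H - (B + f_out)).mean()
    gradient_term = np.abs(G_hr - g_out).mean()
    return image_term + lam * gradient_term
```

In training, both terms would be averaged over a batch and minimized over the shared parameters, matching the residual-prediction formulation of the model.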
The image super-resolution reconstruction method based on combined trilateral feature filtering, wherein bidirectional feature fusion means that the multi-level image features $F_{img}^{i}$ and the multi-level gradient features $F_{grad}^{j}$ are fused alternately by the combined trilateral feature filtering module, realizing image-guided gradient feature enhancement and gradient-guided image feature enhancement.
The image super-resolution reconstruction method based on combined trilateral feature filtering, wherein, in the combined trilateral feature filtering module, the expression corresponding to the fused feature output for a target feature is:

$$\tilde{F}_c(p)=\sum_{q\in\Omega(p)}W^{t}_{c}(p,q)\,W^{g}_{c}(p,q)\,F_c(q)$$

wherein $F_c(q)$ denotes the pre-fusion feature of channel $c$ at spatial position $q$, $W^{t}_{c}$ denotes the convolution kernel weight corresponding to the target-domain features, $W^{g}_{c}$ denotes the convolution kernel weight corresponding to the guiding-domain features, $\Omega(p)$ denotes the local window, and $\tilde{F}_c(p)$ denotes the fused feature of the target feature located at the center $p$ of the local window $\Omega(p)$.
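A single-channel sketch of this filtering step: each output pixel is a local-window sum of the input features weighted by the element-wise product of a target-domain kernel and a guiding-domain kernel. The per-pixel kernel layout (H x W x k x k) and zero padding are illustrative assumptions.

```python
import numpy as np

def jtf_filter(feat, k_target, k_guide):
    """Apply spatially varying joint kernels: out(p) = sum over the local
    window of k_target * k_guide * feat, for one channel."""
    H, W = feat.shape
    k = k_target.shape[-1]
    pad = k // 2
    fp = np.pad(feat, pad)                       # zero padding (assumption)
    out = np.zeros_like(feat, dtype=float)
    for y in range(H):
        for x in range(W):
            window = fp[y:y + k, x:x + k]
            out[y, x] = np.sum(k_target[y, x] * k_guide[y, x] * window)
    return out
```

Because both kernels are predicted per position, the effective filter is content-dependent, unlike a weight-shared convolution.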
The image super-resolution reconstruction method based on combined trilateral feature filtering, wherein, in the combined trilateral feature filtering module, the target-domain features $F\in\mathbb{R}^{C\times H\times W}$ used for the convolution operation are traversed over the spatial ($H\times W$) dimensions with a window of size $k\times k$ to obtain a plurality of three-dimensional image blocks of size $C\times k\times k$, wherein $C$, $H$ and $W$ respectively denote the number of channels, the height and the width;
wherein the processed target-domain features have size $B\times(C\cdot k\cdot k)\times H\times W$, $B$ being the batch size, $\mathbb{R}$ denoting the real number field and $\mathbb{R}^{C\times H\times W}$ denoting the high-dimensional tensor space of coordinate dimension $C\times H\times W$ over the real number field.
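The window traversal can be sketched as a plain unfold operation (equivalent in spirit to `torch.nn.functional.unfold`); zero padding so that every spatial position yields one $C\times k\times k$ block is an assumption.

```python
import numpy as np

def unfold(feat, k=3):
    """Slide a k x k window over the spatial dimensions of a C x H x W
    feature map, returning one C x k x k block per spatial location."""
    C, H, W = feat.shape
    pad = k // 2
    fp = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)))  # zero padding
    blocks = np.empty((H * W, C, k, k), dtype=feat.dtype)
    for y in range(H):
        for x in range(W):
            blocks[y * W + x] = fp[:, y:y + k, x:x + k]
    return blocks
```

Each block can then be weighted by the per-position joint kernel and summed, which is exactly the window sum in the fused-feature expression.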
The image super-resolution reconstruction method based on combined trilateral feature filtering, wherein the kernel function of the image features and the kernel function of the gradient features are learned respectively through two sub-networks with the same structure in the combined trilateral feature filtering module;
one sub-network learns the mapping from the image features $F_{img}$ to the kernel function $K_{img}$ of the image features, wherein $F_{img}\in\mathbb{R}^{C\times H\times W}$ and $K_{img}\in\mathbb{R}^{k^{2}\times H\times W}$;
the other sub-network learns the mapping from the gradient features $F_{grad}$ to the kernel function $K_{grad}$ of the gradient features, wherein $F_{grad}\in\mathbb{R}^{C\times H\times W}$ and $K_{grad}\in\mathbb{R}^{k^{2}\times H\times W}$;
the combined trilateral feature filtering kernel $K=K_{img}\odot K_{grad}$ is obtained by multiplying the corresponding elements of the kernel function $K_{img}$ of the image features and the kernel function $K_{grad}$ of the gradient features, wherein $\mathbb{R}^{C\times H\times W}$ denotes the high-dimensional tensor space of coordinate dimension $C\times H\times W$ over the real number field and $\mathbb{R}^{k^{2}\times H\times W}$ denotes the high-dimensional tensor space of coordinate dimension $k^{2}\times H\times W$ over the real number field.
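A toy sketch of the two same-structure sub-networks and the element-wise kernel product; here each sub-network is stood in for by a single linear map followed by a softmax over the $k^2$ kernel entries, which is an assumption rather than the patent's architecture.

```python
import numpy as np

def joint_kernel(img_feat, grad_feat, w_img, w_grad):
    """Predict a kernel from the image features and one from the gradient
    features with two identically structured stand-in sub-networks, then
    take their element-wise product as the joint trilateral filtering kernel."""
    def subnet(feat, w):
        logits = feat @ w                        # linear stand-in for a sub-network
        e = np.exp(logits - logits.max())
        return e / e.sum()                       # normalized kernel weights
    k_img = subnet(img_feat, w_img)
    k_grad = subnet(grad_feat, w_grad)
    return k_img * k_grad                        # element-wise product
```

The product means a window position is weighted highly only when both the image features and the gradient features agree it is relevant.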
The invention also provides an image super-resolution reconstruction system based on combined trilateral feature filtering, wherein the system comprises:
a model building module configured to:
downsample a high-resolution image to obtain a low-resolution image, construct a single-image super-resolution reconstruction model, and input the low-resolution image into the single-image super-resolution reconstruction model, wherein the single-image super-resolution reconstruction model comprises an image reconstruction branch and a gradient prediction branch, the image reconstruction branch at least comprises a feature refinement module, and the gradient prediction branch at least comprises a gradient refinement module;
a first extraction module configured to:
extract multi-level image features through the feature refinement module in the image reconstruction branch;
a second extraction module configured to:
extract multi-level gradient features through the gradient refinement module in the gradient prediction branch;
a feature fusion module configured to:
perform fusion guidance through a combined trilateral feature filtering module based on the image reconstruction branch and the gradient prediction branch, so as to adaptively adjust the convolution kernels of the target domain, and reconstruct the high-resolution image from coarse to fine;
an iterative convergence module configured to:
when the single-image super-resolution reconstruction model has iterated to convergence, perform forward inference on the single-image super-resolution reconstruction model to finally obtain the super-resolution reconstructed image.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flow chart of an image super-resolution reconstruction method based on combined trilateral feature filtering according to the present invention;
FIG. 2 is a network topology diagram of an image reconstruction branch and a gradient prediction branch in the image super-resolution reconstruction method based on combined trilateral feature filtering provided by the invention;
fig. 3 is a structural diagram of an image super-resolution reconstruction system based on combined trilateral feature filtering according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
These and other aspects of embodiments of the invention will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the invention are disclosed in detail as indicative of some of the ways in which the principles of the embodiments may be practiced, but it is understood that the scope of the embodiments is not limited thereby. On the contrary, the embodiments of the invention include all changes, modifications and equivalents coming within the spirit and scope of the appended claims.
Referring to fig. 1 and fig. 2, the present invention provides an image super-resolution reconstruction method based on a combined trilateral feature filtering, wherein the method includes the following steps:
s101, down-sampling is carried out on the high-resolution image to obtain a low-resolution image, a single-image super-resolution reconstruction model is constructed, and the low-resolution image is input into the single-image super-resolution reconstruction model.
The single-image super-resolution reconstruction model comprises an image reconstruction branch and a gradient prediction branch, wherein the image reconstruction branch at least comprises a characteristic thinning module, and the gradient prediction branch at least comprises a gradient thinning module.
It should be noted here that the neural network model (the image super-resolution reconstruction model based on combined trilateral feature filtering) in the present invention is only used to predict the residual between the bicubic interpolation result $B_i$ corresponding to an image and the high-resolution original image $H_i$.
In step S101, a plurality of high-resolution images constitute a high-resolution image dataset, and the method of processing the high-resolution image dataset comprises the following steps:
S1011, dividing the high-resolution image dataset into a training set, a validation set and a test set;
S1012, downsampling each high-resolution image in the high-resolution image dataset to generate the corresponding low-resolution image;
S1013, correspondingly cropping the high-resolution image and the low-resolution image into paired sub image blocks of a preset image size, and selecting specific sub image blocks for data enhancement through random flipping and rotation, thereby finally obtaining the training set.
For the single-image super-resolution reconstruction model, the corresponding loss function is represented as:

$$\min_{\theta}\ L(\theta)=\frac{1}{P}\sum_{i=1}^{P}\Big(\big\|H_i-\big(B_i+f(L_i;\theta)\big)\big\|+\lambda\,\big\|G_i^{H}-g(G_i^{L};\theta)\big\|\Big)$$

wherein $L(\theta)$ denotes the loss value corresponding to the loss function, $\min_{\theta}$ denotes the operation of taking the minimum value of the loss function over the model parameters $\theta$, $P$ denotes the maximum number of training images, $i$ denotes the serial number of a training image, $H_i$ denotes the high-resolution original image of the $i$-th training image, $L_i$ denotes the low-resolution original image of the $i$-th training image, $B_i$ denotes the bicubic interpolation result corresponding to the $i$-th training image, $f(L_i;\theta)$ denotes the model output generated from the low-resolution image of the $i$-th training image under parameters $\theta$, $\lambda$ denotes the balance weight, $G_i^{H}$ denotes the high-resolution gradient map of the $i$-th training image, $G_i^{L}$ denotes the low-resolution gradient map of the $i$-th training image, and $g(G_i^{L};\theta)$ denotes the model output generated for the high-resolution gradient map of the $i$-th training image under parameters $\theta$.
S102, extracting multi-level image features through the feature refinement module in the image reconstruction branch.
Specifically, the image reconstruction branch comprises a first shallow feature extraction module, a feature refinement module and an image upsampling module which are connected in sequence.
The first shallow feature extraction module consists of two convolution layers and is used for extracting the first shallow image feature $F_{img}^{0}$. To further extract features, the feature refinement module comprises $N$ sequentially connected residual dense blocks used for extracting the multi-level image features $F_{img}^{i}$, $i=1,\dots,N$, wherein $N$ denotes the maximum number of residual dense blocks and each residual dense block comprises two basic units. It should be noted that the multi-level image feature $F_{img}^{i}$ extracted by the current residual dense block is updated through bidirectional feature fusion before being input to the next residual dense block.
Meanwhile, the image upsampling module takes as input the outputs of the two basic units of the $N$-th residual dense block, concatenated along the channel dimension. Furthermore, unlike the feature refinement module, the image upsampling module only performs gradient-guided image feature enhancement.
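The channel-dimension concatenation feeds an image upsampling module whose internals the text does not specify; a sub-pixel (pixel-shuffle) rearrangement is one common choice for such modules and is assumed here purely for illustration.

```python
import numpy as np

def pixel_shuffle(feat, scale=2):
    """Rearrange a (C * scale^2) x H x W feature map into a
    C x (H * scale) x (W * scale) map, as in sub-pixel upsampling."""
    C2, H, W = feat.shape
    C = C2 // (scale * scale)
    # Split the channel axis into (C, scale, scale) and interleave spatially.
    out = feat.reshape(C, scale, scale, H, W).transpose(0, 3, 1, 4, 2)
    return out.reshape(C, H * scale, W * scale)
```

The channel count of the concatenated block outputs would be reduced to `C * scale^2` by a convolution before this rearrangement.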
S103, extracting multi-level gradient features through the gradient refinement module in the gradient prediction branch.
The gradient prediction branch comprises a second shallow feature extraction module, a gradient refinement module and a gradient reconstruction module.
The second shallow feature extraction module consists of two convolution layers and is used for extracting the second shallow image feature $F_{grad}^{0}$. To further extract features, the gradient refinement module comprises $M$ sequentially connected residual blocks used for extracting the multi-level gradient features $F_{grad}^{j}$, $j=1,\dots,M$, wherein $M$ denotes the maximum number of residual blocks and each residual block comprises two basic units. It should be noted that the multi-level gradient feature $F_{grad}^{j}$ extracted by the current residual block is updated through bidirectional feature fusion before being input to the next residual block. The gradient reconstruction module takes the refined multi-level gradient features $F_{grad}^{j}$ as input and then outputs a high-resolution gradient map. Furthermore, the low-resolution gradient image input to the gradient prediction branch is generated by the Sobel operator.
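The Sobel-generated low-resolution gradient image mentioned above can be sketched as follows; combining the horizontal and vertical responses into an L2 magnitude map is a common convention and an assumption here.

```python
import numpy as np

def sobel_gradient(img):
    """Compute a gradient-magnitude map from horizontal and vertical
    Sobel responses, with zero padding at the borders."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    ip = np.pad(img, 1)
    gx = np.zeros((H, W)); gy = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            win = ip[y:y + 3, x:x + 3]
            gx[y, x] = (kx * win).sum()
            gy[y, x] = (ky * win).sum()
    return np.sqrt(gx ** 2 + gy ** 2)
```

A flat region yields zero response in the interior, so the map highlights exactly the edges the gradient branch is meant to restore.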
S104, performing fusion guidance through the combined trilateral feature filtering module based on the image reconstruction branch and the gradient prediction branch, so as to adaptively adjust the convolution kernels of the target domain, and reconstructing the high-resolution image from coarse to fine.
It should be noted that bidirectional feature fusion means that the multi-level image features {F_i} and the multi-level gradient features {G_j} are fused alternately by the combined trilateral feature filtering module, realizing image-guided gradient feature enhancement and gradient-guided image feature enhancement. Since the quality of the gradient features in the low-resolution domain is low, the image features are first used to refine the corresponding gradient features; the direction of feature fusion is then alternated along a preset order, and in this process the image features and the gradient features are strengthened from coarse to fine.
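The alternating schedule above can be sketched as follows. The `fuse` function here is only a hand-written stand-in for the learned combined trilateral feature filtering step; the point of the sketch is the direction-alternating loop (image guides gradient first, then gradient guides image):

```python
import numpy as np

def fuse(target: np.ndarray, guide: np.ndarray) -> np.ndarray:
    """Stand-in for the learned filtering step: a guide-weighted blend.
    Positions where guide agrees with target receive a larger guide weight."""
    w = 1.0 / (1.0 + np.abs(target - guide))
    return (1 - 0.5 * w) * target + 0.5 * w * guide

def bidirectional_fusion(img_feats, grad_feats):
    """Alternate the fusion direction along the block sequence:
    image-guided gradient enhancement first (LR gradients are weaker),
    then gradient-guided image enhancement, and so on, coarse to fine."""
    img_feats, grad_feats = list(img_feats), list(grad_feats)
    for i in range(len(img_feats)):
        if i % 2 == 0:  # image guides gradient
            grad_feats[i] = fuse(grad_feats[i], img_feats[i])
        else:           # gradient guides image
            img_feats[i] = fuse(img_feats[i], grad_feats[i])
    return img_feats, grad_feats
```

In the patented model, each fusion output is also fed into the next residual (dense) block, which this sketch omits.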
In addition, in the above combined trilateral feature filtering module, the fused feature of the target feature obtained as output is expressed as:

X̂_p = Σ_{q ∈ Ω} W^t_q · W^g_q · X^c_q

wherein X^c_q represents the pre-fusion feature of channel c at spatial position q, W^t represents the convolution-kernel weight corresponding to the target-domain feature, W^g represents the convolution-kernel weight corresponding to the guidance-domain feature, Ω represents a local window, and X̂_p represents the fused feature of the target feature located at the center p of the local window Ω.
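The windowed weighted sum above can be written directly in NumPy. In this sketch the two kernel-weight tensors are supplied as inputs (in the patent they are learned by sub-networks); the loop form trades speed for clarity:

```python
import numpy as np

def jtf_fuse(target, guide_w, target_w, k=3):
    """Fused feature at each position p: sum over the k x k window of
    target-domain kernel * guidance-domain kernel * pre-fusion feature.
    target: (C, H, W) pre-fusion features.
    target_w, guide_w: (C, k*k, H, W) per-position kernel weights."""
    C, H, W = target.shape
    pad = k // 2
    padded = np.pad(target, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(target, dtype=np.float64)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                patch = padded[c, i:i + k, j:j + k].reshape(-1)       # k*k neighbours
                kernel = target_w[c, :, i, j] * guide_w[c, :, i, j]   # element-wise product
                out[c, i, j] = np.dot(kernel, patch)                  # inner product
    return out
```

With both kernels set to a constant 1/3 (so their product is 1/9), the operation degenerates to a 3×3 box filter, which is a convenient sanity check.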
Further, in the combined trilateral feature filtering module, to perform the convolution operation, the target-domain feature X ∈ R^{C×H×W} is traversed in the spatial dimensions H×W with a window of size k×k to obtain a plurality of three-dimensional image blocks, wherein C, H and W represent the number of channels, the height and the width, respectively, and R^{C×H×W} represents the high-dimensional tensor space over the real field with coordinate dimension C×H×W.

Thus, the processed target-domain feature has size B×C×k²×H×W, wherein B is the batch size.

Meanwhile, the learned combined trilateral filtering kernel has the same size B×C×k²×H×W. The Joint Trilateral Filtering (JTF) convolution operation is then completed by an inner product along the k² dimension, carried out independently in the channel and spatial dimensions.
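The window traversal and per-position inner product can be sketched as an im2col-style unfold. The kernel here is random, standing in for the learned combined trilateral kernel of the same size:

```python
import numpy as np

def unfold_kxk(x, k=3):
    """Traverse the spatial dims of x (B, C, H, W) with a k x k window,
    yielding blocks of size (B, C, k*k, H, W) — the processed
    target-domain feature size described above."""
    B, C, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode="edge")
    blocks = np.empty((B, C, k * k, H, W), dtype=x.dtype)
    for di in range(k):
        for dj in range(k):
            blocks[:, :, di * k + dj] = xp[:, :, di:di + H, dj:dj + W]
    return blocks

# JTF convolution: inner product along the k*k axis, done independently
# per channel and per spatial position.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 5, 6))
kernel = rng.standard_normal((2, 4, 9, 5, 6))
out = np.sum(unfold_kxk(x, 3) * kernel, axis=2)   # (B, C, H, W)
```

The unfolded tensor's center slice (index k²//2) recovers the original feature, which makes the indexing easy to verify.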
In this embodiment, in the above combined trilateral feature filtering module, a kernel function of an image feature and a kernel function of a gradient feature are learned through two subnetworks with the same structure, respectively;
one sub-network learns the mapping from the image feature F ∈ R^{C×H×W} to the kernel function of the image feature K^F ∈ R^{C×k²×H×W};

the other sub-network learns the mapping from the gradient feature G ∈ R^{C×H×W} to the kernel function of the gradient feature K^G ∈ R^{C×k²×H×W};

the combined trilateral feature filtering kernel K is obtained by multiplying the corresponding elements of the kernel function K^F of the image feature and the kernel function K^G of the gradient feature, wherein R^{C×H×W} and R^{C×k²×H×W} represent the high-dimensional tensor spaces over the real field with coordinate dimensions C×H×W and C×k²×H×W, respectively. Each position of the target-domain feature (of size B×C×H×W) generates kernel weights corresponding to the kernel size k×k. Inspired by Involution, the channels are divided into multiple groups, and within a group C_g channels share spatially varying kernels to further compress the model, i.e., the number of kernel channels becomes C/C_g. Hence, the size of the combined trilateral feature filtering kernel K is updated from B×C×k²×H×W to B×(C/C_g)×k²×H×W.
After updating the kernel function of the image feature and the kernel function of the gradient feature, the fused feature of the target feature in the combined trilateral feature filtering module is updated as:

X̂^g_p = Σ_{q ∈ Ω} K^{F,g}_q · K^{G,g}_q · X^g_q

wherein K^{F,g}_q represents the kernel function learned at spatial position q of the target domain within channel group g, and K^{G,g}_q represents the kernel function learned at spatial position q of the guidance domain within channel group g.
The number of residual dense connection blocks and residual blocks in the feature refinement module and the gradient refinement module of the present invention is set to 5; the kernel size of all bottleneck layers is 1×1, and the kernel size of the other convolutional layers is set to 3×3. The number of channels of the first convolutional layer of the shallow feature extraction modules of the image reconstruction branch and the gradient prediction branch is set to 128, and the number of channels of the other features is 64. For the design of the combined trilateral filtering module, the number of channels C and the number of shared channels within a group C_g are set to 32 and 2, respectively.
And S105, when the single-image super-resolution reconstruction model is iterated to be converged, carrying out forward inference on the single-image super-resolution reconstruction model to finally obtain a super-resolution reconstructed image.
In this step, after the single-image super-resolution reconstruction model has iterated to convergence, the residual output by the model is added to the bicubic-upsampled version of the initial low-resolution image to obtain the predicted super-resolution reconstructed image.
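The final composition step is a simple addition at high resolution. In this sketch, nearest-neighbour upsampling stands in for bicubic interpolation to keep the example dependency-free, and the residual is a constant stand-in for the model output:

```python
import numpy as np

def upsample(img, scale):
    """Stand-in for bicubic upsampling (nearest-neighbour, dependency-free)."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def reconstruct(lr, residual, scale=4):
    """Predicted SR image = upsampled LR image + model-predicted residual."""
    return upsample(lr, scale) + residual

lr = np.full((3, 3), 0.5)                 # low-resolution input
residual = 0.1 * np.ones((12, 12))        # model output at HR resolution
sr = reconstruct(lr, residual, scale=4)
```

Predicting only the residual lets the network spend its capacity on high-frequency detail that interpolation cannot recover.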
The invention provides an image super-resolution reconstruction method based on combined trilateral feature filtering. First, the filtering definition domain is extended from the pixel domain to a high-dimensional feature domain, and a combined trilateral feature filter defined on the image feature domain and the gradient feature domain is simulated by constructing a neural network structure; by perceiving the corresponding features in the image feature domain and the gradient feature domain, the combined trilateral feature filtering module adaptively adjusts the convolution kernel of the target-domain features, improving the generalization performance of the feature fusion module.

Second, based on the combined trilateral feature filtering module, the image features and the gradient features are alternately taken as guidance-domain features, deeply exploiting the mutual guidance between the gradient domain and the image domain and realizing bidirectional fusion of the feature domains. Compared with current state-of-the-art methods, the image super-resolution reconstruction method based on combined trilateral feature filtering achieves the best results in both subjective and objective evaluation.
Referring to fig. 3, the present invention further provides an image super-resolution reconstruction system based on joint trilateral feature filtering, wherein the system includes:
a model building module to:
the method comprises the steps of downsampling a high-resolution image to obtain a low-resolution image, constructing a single-image super-resolution reconstruction model, and inputting the low-resolution image into the single-image super-resolution reconstruction model, wherein the single-image super-resolution reconstruction model comprises an image reconstruction branch and a gradient prediction branch, the image reconstruction branch at least comprises a feature refinement module, and the gradient prediction branch at least comprises a gradient refinement module;
a first extraction module to:
extracting multi-level image features through the feature refinement module in the image reconstruction branch;
a second extraction module to:
extracting multi-level gradient features through the gradient refinement module in the gradient prediction branch;
a feature fusion module to:
based on the image reconstruction branch and the gradient prediction branch, fusion guidance is carried out through a combined trilateral feature filtering module so as to realize self-adaptive adjustment of convolution kernels of a target domain, and high-resolution image reconstruction is carried out from coarse to fine;
an iterative convergence module to:
and when the single-image super-resolution reconstruction model is iterated to be converged, carrying out forward inference on the single-image super-resolution reconstruction model to finally obtain a super-resolution reconstructed image.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, and these changes and modifications are all within the scope of the invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (5)

1. An image super-resolution reconstruction method based on combined trilateral feature filtering is characterized by comprising the following steps:
the method comprises the steps of firstly, down-sampling a high-resolution image to obtain a low-resolution image, constructing a single-image super-resolution reconstruction model, and inputting the low-resolution image into the single-image super-resolution reconstruction model, wherein the single-image super-resolution reconstruction model comprises an image reconstruction branch and a gradient prediction branch, the image reconstruction branch at least comprises a feature refinement module, and the gradient prediction branch at least comprises a gradient refinement module;
secondly, extracting multi-level image features through the feature refinement module in the image reconstruction branch;
step three, extracting multi-level gradient features through the gradient refinement module in the gradient prediction branch;
performing fusion guidance by combining a trilateral feature filtering module based on the image reconstruction branch and the gradient prediction branch to realize self-adaptive adjustment of a convolution kernel of a target domain, and reconstructing a high-resolution image from coarse to fine;
fifthly, when the single-image super-resolution reconstruction model is iterated to be converged, carrying out forward inference on the single-image super-resolution reconstruction model to finally obtain a super-resolution reconstructed image;
the image reconstruction branch comprises a first shallow feature extraction module, a feature refinement module and an image upsampling module which are connected in sequence;

the first shallow feature extraction module consists of two convolutional layers and is used for extracting the first shallow image feature F_0;

the feature refinement module comprises N sequentially connected residual dense connection blocks for extracting the multi-level image features {F_i}, i = 1, 2, …, N, wherein N represents the maximum number of residual dense connection blocks and each residual dense connection block comprises two basic units, and wherein the multi-level image feature F_i extracted by the current residual dense connection block is updated through bidirectional feature fusion before being input to the next residual dense connection block;

the outputs of the basic units in the N residual dense connection blocks are concatenated along the channel dimension to serve as the input of the image upsampling module;
the gradient prediction branch comprises a second shallow layer feature extraction module, a gradient refinement module and a gradient reconstruction module;
the second shallow feature extractorThe fetching module consists of two convolution layers and is used for extracting and obtaining the characteristics of the second shallow image
Figure 416648DEST_PATH_IMAGE007
The gradient refining module comprises
Figure 898445DEST_PATH_IMAGE008
A plurality of sequentially connected residual blocks for extracting multi-level gradient features
Figure 312109DEST_PATH_IMAGE009
Figure 195751DEST_PATH_IMAGE010
Wherein
Figure 669458DEST_PATH_IMAGE008
representing the maximum number of residual blocks, each of which comprises two elementary units, wherein the current residual block is used for multi-level gradient features to be extracted
Figure 638551DEST_PATH_IMAGE011
Updating through bidirectional feature fusion before inputting to the next residual block;
the gradient reconstruction module is used for carrying out multi-level gradient feature after refinement
Figure 590326DEST_PATH_IMAGE012
As input, then outputting a high resolution gradient map;
the loss function corresponding to the single-image super-resolution reconstruction model is expressed as follows:
Figure DEST_PATH_IMAGE013
wherein,
Figure 328475DEST_PATH_IMAGE014
the corresponding loss value of the loss function is represented,
Figure 973083DEST_PATH_IMAGE015
representing the operation of taking the minimum value of the loss function,
Figure 429472DEST_PATH_IMAGE016
represents the maximum number of training images and,
Figure 184938DEST_PATH_IMAGE017
the sequence number of the training image is represented,
Figure 777594DEST_PATH_IMAGE018
is shown as
Figure 593103DEST_PATH_IMAGE017
A high resolution original image in a training image,
Figure 536788DEST_PATH_IMAGE019
is shown as
Figure 98875DEST_PATH_IMAGE017
A low resolution original image in a training image,
Figure 811616DEST_PATH_IMAGE020
is shown as
Figure 532448DEST_PATH_IMAGE017
The result of the bicubic interpolation corresponding to a training image,
Figure 963429DEST_PATH_IMAGE021
denotes the first
Figure 60698DEST_PATH_IMAGE017
Stretch training pictureParameters in low resolution images
Figure 893525DEST_PATH_IMAGE022
The corresponding model is generated according to the model,
Figure 785257DEST_PATH_IMAGE023
a value representing the weight of the balance weight,
Figure 703535DEST_PATH_IMAGE024
is shown as
Figure 870074DEST_PATH_IMAGE017
A high resolution gradient map in a training image,
Figure 291828DEST_PATH_IMAGE025
is shown as
Figure 620041DEST_PATH_IMAGE017
A low resolution gradient map in a training image,
Figure 760036DEST_PATH_IMAGE026
is shown as
Figure 464686DEST_PATH_IMAGE017
Parameters under high-resolution gradient map in training image
Figure 6526DEST_PATH_IMAGE022
Generating a corresponding model;
bidirectional feature fusion means that the multi-level image features {F_i} and the multi-level gradient features {G_j} are fused alternately by the combined trilateral feature filtering module to realize image-guided gradient feature enhancement and gradient-guided image feature enhancement;

in the combined trilateral feature filtering module, the fused feature of the target feature obtained as output is expressed as:

X̂_p = Σ_{q ∈ Ω} W^t_q · W^g_q · X^c_q

wherein X^c_q represents the pre-fusion feature of channel c at spatial position q, W^t represents the convolution-kernel weight corresponding to the target-domain feature, W^g represents the convolution-kernel weight corresponding to the guidance-domain feature, Ω represents a local window, and X̂_p represents the fused feature of the target feature located at the center p of the local window Ω.
2. The image super-resolution reconstruction method based on combined trilateral feature filtering according to claim 1, wherein a plurality of the high-resolution images form a high-resolution image data set, and the method for processing the high-resolution image data set comprises the following steps:
dividing the high-resolution image data set into a training set, a verification set and a test set;
down-sampling a high resolution image in the high resolution image data set to generate a corresponding low resolution image;
and correspondingly cropping the high-resolution image and the low-resolution image into paired sub-image blocks according to a preset image size, selecting specific sub-image blocks, and performing random flipping and rotation for data enhancement, thereby finally obtaining the training set.
3. The image super-resolution reconstruction method based on combined trilateral feature filtering according to claim 1, wherein in the combined trilateral feature filtering module, to perform the convolution operation, the target-domain feature X ∈ R^{C×H×W} is traversed in the spatial dimensions H×W with a window of size k×k to obtain a plurality of three-dimensional image blocks, wherein C, H and W represent the number of channels, the height and the width, respectively;

wherein the processed target-domain feature has size B×C×k²×H×W, B is the batch size, R represents the real field, and R^{C×H×W} represents the high-dimensional tensor space over the real field with coordinate dimension C×H×W.
4. The image super-resolution reconstruction method based on combined trilateral feature filtering according to claim 3, wherein in the combined trilateral feature filtering module, the kernel function of the image feature and the kernel function of the gradient feature are learned through two sub-networks with the same structure, respectively;

one sub-network learns the mapping from the image feature F ∈ R^{C×H×W} to the kernel function of the image feature K^F ∈ R^{C×k²×H×W};

the other sub-network learns the mapping from the gradient feature G ∈ R^{C×H×W} to the kernel function of the gradient feature K^G ∈ R^{C×k²×H×W};

the combined trilateral feature filtering kernel K is obtained by multiplying the corresponding elements of the kernel function K^F of the image feature and the kernel function K^G of the gradient feature, wherein R^{C×H×W} represents the high-dimensional tensor space over the real field with coordinate dimension C×H×W, and R^{C×k²×H×W} represents the high-dimensional tensor space over the real field with coordinate dimension C×k²×H×W.
5. A combined trilateral feature filtering-based image super-resolution reconstruction system, which applies the combined trilateral feature filtering-based image super-resolution reconstruction method of any one of the above claims 1 to 4, the system comprising:
a model building module to:
the method comprises the steps of down-sampling a high-resolution image to obtain a low-resolution image, constructing a single-image super-resolution reconstruction model, and inputting the low-resolution image into the single-image super-resolution reconstruction model, wherein the single-image super-resolution reconstruction model comprises an image reconstruction branch and a gradient prediction branch, the image reconstruction branch at least comprises a feature refining module, and the gradient prediction branch at least comprises a gradient refining module;
a first extraction module to:
extracting multi-level image features through the feature refinement module in the image reconstruction branch;
a second extraction module to:
extracting multi-level gradient features through the gradient refinement module in the gradient prediction branch;
a feature fusion module to:
based on the image reconstruction branch and the gradient prediction branch, fusion guidance is carried out through a combined trilateral feature filtering module so as to realize self-adaptive adjustment of convolution kernels of a target domain, and high-resolution image reconstruction is carried out from coarse to fine;
an iterative convergence module to:
and when the single-image super-resolution reconstruction model is iterated to be converged, carrying out forward inference on the single-image super-resolution reconstruction model to finally obtain a super-resolution reconstructed image.
CN202210924242.4A 2022-08-03 2022-08-03 Image super-resolution reconstruction method and system based on combined trilateral feature filtering Active CN114972043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210924242.4A CN114972043B (en) 2022-08-03 2022-08-03 Image super-resolution reconstruction method and system based on combined trilateral feature filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210924242.4A CN114972043B (en) 2022-08-03 2022-08-03 Image super-resolution reconstruction method and system based on combined trilateral feature filtering

Publications (2)

Publication Number Publication Date
CN114972043A CN114972043A (en) 2022-08-30
CN114972043B true CN114972043B (en) 2022-10-25

Family

ID=82969635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210924242.4A Active CN114972043B (en) 2022-08-03 2022-08-03 Image super-resolution reconstruction method and system based on combined trilateral feature filtering

Country Status (1)

Country Link
CN (1) CN114972043B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228786B (en) * 2023-05-10 2023-08-08 青岛市中心医院 Prostate MRI image enhancement segmentation method, device, electronic equipment and storage medium
CN116402692B (en) * 2023-06-07 2023-08-18 江西财经大学 Depth map super-resolution reconstruction method and system based on asymmetric cross attention
CN116523759B (en) * 2023-07-04 2023-09-05 江西财经大学 Image super-resolution reconstruction method and system based on frequency decomposition and restarting mechanism

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017075768A1 (en) * 2015-11-04 2017-05-11 北京大学深圳研究生院 Super-resolution image reconstruction method and device based on dictionary matching
CN112200720A (en) * 2020-09-29 2021-01-08 中科方寸知微(南京)科技有限公司 Super-resolution image reconstruction method and system based on filter fusion
EP3816928A1 (en) * 2019-11-04 2021-05-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and computer-readable storage medium
WO2021258529A1 (en) * 2020-06-22 2021-12-30 北京大学深圳研究生院 Image resolution reduction and restoration method, device, and readable storage medium
CN113920014A (en) * 2021-10-25 2022-01-11 江西财经大学 Neural-networking-based combined trilateral filter depth map super-resolution reconstruction method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9064476B2 (en) * 2008-10-04 2015-06-23 Microsoft Technology Licensing, Llc Image super-resolution using gradient profile prior
CN109345449B (en) * 2018-07-17 2020-11-10 西安交通大学 Image super-resolution and non-uniform blur removing method based on fusion network
CN111192200A (en) * 2020-01-02 2020-05-22 南京邮电大学 Image super-resolution reconstruction method based on fusion attention mechanism residual error network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017075768A1 (en) * 2015-11-04 2017-05-11 北京大学深圳研究生院 Super-resolution image reconstruction method and device based on dictionary matching
EP3816928A1 (en) * 2019-11-04 2021-05-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and computer-readable storage medium
WO2021258529A1 (en) * 2020-06-22 2021-12-30 北京大学深圳研究生院 Image resolution reduction and restoration method, device, and readable storage medium
CN112200720A (en) * 2020-09-29 2021-01-08 中科方寸知微(南京)科技有限公司 Super-resolution image reconstruction method and system based on filter fusion
CN113920014A (en) * 2021-10-25 2022-01-11 江西财经大学 Neural-networking-based combined trilateral filter depth map super-resolution reconstruction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Image Super-Resolution reconstruction based on adaptive gradient field sharpening; T Li et al.; Digital Signal Processing (DSP); 2013-07-31; full text *
Image super-resolution algorithm using a dual-channel convolutional neural network; Xu Ran et al.; Journal of Image and Graphics; 2016-05-16 (No. 05); full text *
Super-resolution reconstruction of shale images based on a pixel- and gradient-domain dual-layer deep convolutional neural network; Zhan Wenshu et al.; Science Technology and Engineering; 2018-01-28 (No. 03); full text *

Also Published As

Publication number Publication date
CN114972043A (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN114972043B (en) Image super-resolution reconstruction method and system based on combined trilateral feature filtering
CN109389556B (en) Multi-scale cavity convolutional neural network super-resolution reconstruction method and device
CN111311518B (en) Image denoising method and device based on multi-scale mixed attention residual error network
CN110136066B (en) Video-oriented super-resolution method, device, equipment and storage medium
CN114549731B (en) Method and device for generating visual angle image, electronic equipment and storage medium
CN110136062B (en) Super-resolution reconstruction method combining semantic segmentation
Cheng et al. Zero-shot image super-resolution with depth guided internal degradation learning
Singh et al. Survey on single image based super-resolution—implementation challenges and solutions
CN109447897B (en) Real scene image synthesis method and system
Couturier et al. Image denoising using a deep encoder-decoder network with skip connections
CN113837946B (en) Lightweight image super-resolution reconstruction method based on progressive distillation network
CN114049420B (en) Model training method, image rendering method, device and electronic equipment
CN111179196B (en) Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN113096239B (en) Three-dimensional point cloud reconstruction method based on deep learning
Bastanfard et al. Toward image super-resolution based on local regression and nonlocal means
CN116343052B (en) Attention and multiscale-based dual-temporal remote sensing image change detection network
CN112200720A (en) Super-resolution image reconstruction method and system based on filter fusion
CN112184587A (en) Edge data enhancement model, and efficient edge data enhancement method and system based on model
Wei et al. A-ESRGAN: Training real-world blind super-resolution with attention U-Net Discriminators
CN112184547A (en) Super-resolution method of infrared image and computer readable storage medium
CN113920014A (en) Neural-networking-based combined trilateral filter depth map super-resolution reconstruction method
CN113506305B (en) Image enhancement method, semantic segmentation method and device for three-dimensional point cloud data
CN113902611A (en) Image beautifying processing method and device, storage medium and electronic equipment
Sun et al. A rapid and accurate infrared image super-resolution method based on zoom mechanism
Zheng et al. Joint residual pyramid for joint image super-resolution

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant