CN112053287B - Image super-resolution method, device and equipment - Google Patents


Info

Publication number
CN112053287B
CN112053287B
Authority
CN
China
Prior art keywords
image
feature extraction
depth feature
super
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010952506.8A
Other languages
Chinese (zh)
Other versions
CN112053287A (en)
Inventor
曲昭伟
谷嘉航
王晓茹
但家旺
徐培容
张珩
熊崧凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN202010952506.8A priority Critical patent/CN112053287B/en
Publication of CN112053287A publication Critical patent/CN112053287A/en
Application granted granted Critical
Publication of CN112053287B publication Critical patent/CN112053287B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image super-resolution method, device and equipment, wherein after an image to be subjected to image super-resolution is obtained, a super-resolution image corresponding to the image is obtained by using an image super-resolution model based on shallow features of the image and depth features of the image, and the depth features of the image comprise self texture features of the image. In the scheme, the image super-resolution model can utilize the texture features of the image, so that the texture features in the super-resolution image are clearer.

Description

Image super-resolution method, device and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for super-resolution of an image.
Background
Image super-resolution is the task of obtaining a high-resolution image from a low-resolution image. For example, a low-resolution image of 128 × 128 × 3, magnified by a factor of 2 through super-resolution, yields a high-resolution image of 256 × 256 × 3.
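As a quick sanity check on the shapes involved, the 2× magnification above can be mimicked with a naive nearest-neighbor upscale in NumPy. This is a non-learned stand-in used only to illustrate the size arithmetic, not a super-resolution method:

```python
import numpy as np

def upscale_nearest(img, scale=2):
    # Repeat every pixel along height and width; a crude, non-learned
    # stand-in used only to show the input/output shapes involved.
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

low = np.zeros((128, 128, 3), dtype=np.float32)   # H x W x C low-resolution image
high = upscale_nearest(low, scale=2)
print(high.shape)  # (256, 256, 3)
```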
Currently, the more common methods for image super-resolution are mainly based on deep learning, such as the super-resolution model RCAN (residual channel attention network, published in Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, "Image super-resolution using very deep residual channel attention networks," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 286-301), and the super-resolution model SAN (second-order attention network, published in T. Dai, J. Cai, Y. Zhang, S. Xia, and L. Zhang, "Second-order attention network for single image super-resolution," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 11065-11074).
However, current super-resolution models cannot exploit the texture features of the image itself.
Disclosure of Invention
In view of the above problems, the present application provides an image super-resolution method, apparatus and device. The specific scheme is as follows:
an image super-resolution method, comprising:
acquiring an image to be subjected to super-resolution;
and obtaining a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using an image super-resolution model, wherein the depth feature of the image comprises the self texture feature of the image.
Optionally, the obtaining, by using an image super-resolution model, a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image includes:
utilizing a shallow feature extraction module of an image super-resolution model to perform shallow feature extraction processing on the image to obtain shallow features of the image;
utilizing a depth feature extraction module of an image super-resolution model to perform depth feature extraction processing on shallow features of the image to obtain depth features of the image;
and performing up-sampling reconstruction processing on the shallow feature of the image and the depth feature of the image by using an up-sampling reconstruction module of the image super-resolution model to obtain a super-resolution image corresponding to the image.
Optionally, the method for constructing the depth feature extraction module of the image super-resolution model includes:
constructing the depth feature extraction module of the image super-resolution model based on a pre-constructed depth feature extraction unit;
the depth feature extraction unit comprises a first preset number of branches, each branch being a stack of a different number of residual units; the depths of adjacent branches differ by a factor of 2, and adjacent branches share at least one convolution layer.
Optionally, the depth feature extraction module for constructing the image super-resolution model based on a pre-constructed depth feature extraction unit includes:
connecting a plurality of depth feature extraction units in series to construct the depth feature extraction module of the image super-resolution model;
wherein, among the plurality of depth feature extraction units, the input of the first depth feature extraction unit is the shallow feature of the image, and the input of each subsequent depth feature extraction unit is the output of the depth feature extraction unit immediately preceding it.
Optionally, the performing, by the depth feature extraction module of the image super-resolution model, depth feature extraction processing on the shallow feature of the image to obtain the depth feature of the image includes:
inputting the shallow features of the image into a first depth feature extraction unit of the depth feature extraction module to obtain the output of each depth feature extraction unit;
and fusing the output of each depth feature extraction unit to obtain the depth feature of the image.
Optionally, the depth feature extraction module for constructing the image super-resolution model based on a pre-constructed depth feature extraction unit includes:
widening the depth feature extraction unit to obtain a widened depth feature extraction unit, wherein the widened depth feature extraction unit is a depth feature extraction module of the image super-resolution model;
the widened depth feature extraction unit comprises a second preset number of branches, each branch being a stack of a different number of residual units; the depths of adjacent branches differ by a factor of 2, adjacent branches share at least one convolution layer, and the second preset number is larger than the first preset number.
Optionally, the performing, by the depth feature extraction module of the image super-resolution model, depth feature extraction processing on the shallow feature of the image to obtain the depth feature of the image includes:
inputting the shallow features of the image into the widened depth feature extraction unit to obtain each branch output in the widened depth feature extraction unit;
and fusing the branch outputs to obtain the depth characteristics of the image.
Optionally, the image super-resolution model is trained in the following manner:
acquiring a training image and a super-resolution image corresponding to the training image;
and training by taking the training images as training samples, taking the super-resolution images corresponding to the training images as sample labels and taking the minimum loss function as a training target to obtain the image super-resolution model.
An image super-resolution device comprising:
the acquisition unit is used for acquiring an image to be subjected to super-resolution;
and the processing unit is used for obtaining a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using an image super-resolution model, wherein the depth feature of the image comprises the self texture feature of the image.
An image super-resolution device includes a memory and a processor;
the memory is used for storing programs;
the processor is used for executing the program and realizing the steps of the image super-resolution method.
By means of the technical scheme, the application discloses an image super-resolution method, device and equipment, after an image to be subjected to image super-resolution is obtained, a super-resolution image corresponding to the image is obtained by using an image super-resolution model based on shallow features of the image and depth features of the image, and the depth features of the image comprise self texture features of the image. In the scheme, the image super-resolution model can utilize the texture features of the image, so that the texture features in the super-resolution image are clearer.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic flow chart of an image super-resolution method disclosed in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an image super-resolution model disclosed in an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for obtaining a super-resolution image corresponding to an image based on a shallow feature of the image and a depth feature of the image by using an image super-resolution model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a depth feature extraction unit according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of a depth feature extraction module disclosed in an embodiment of the present application;
FIG. 6 is a schematic diagram of a depth feature extraction module disclosed in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an image super-resolution device disclosed in an embodiment of the present application;
fig. 8 is a block diagram of a hardware structure of an image super-resolution device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Next, the image super-resolution method provided by the present application is described by the following embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image super-resolution method disclosed in an embodiment of the present application, where the method may include:
step S101: and acquiring an image to be subjected to super-resolution of the image.
In the present application, the image to be subjected to image super-resolution may be any image, and the image is generally a low-resolution image.
Step S102: and obtaining a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using an image super-resolution model, wherein the depth feature of the image comprises the self texture feature of the image.
In the present application, the image super-resolution model has a function of obtaining a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image, and particularly, it is emphasized that the depth feature of the image includes a self-texture feature of the image, and the self-texture feature of the image contributes to reconstruction of a texture in the super-resolution image corresponding to the image.
It should be noted that, the detailed description about the image super-resolution model will be described in detail by the following embodiments, and will not be expanded here.
The embodiment discloses an image super-resolution method, which comprises the steps of obtaining an image to be subjected to image super-resolution, obtaining a super-resolution image corresponding to the image based on shallow features of the image and depth features of the image by using an image super-resolution model, wherein the depth features of the image comprise self texture features of the image. In the scheme, the image super-resolution model can utilize the texture features of the image, so that the texture features in the super-resolution image are clearer.
In another embodiment of the present application, a structure of the image super-resolution model is described. Referring to fig. 2, fig. 2 is a schematic structural diagram of an image super-resolution model disclosed in an embodiment of the present application, and as shown in fig. 2, the image super-resolution model includes a shallow feature extraction module, a depth feature extraction module, and an up-sampling reconstruction module.
The input of the shallow feature extraction module is a low-resolution image, and its output is the shallow feature of that image. As one possible implementation, the shallow feature extraction module may use a single convolution layer to map the 3-channel RGB input image to 64-channel intermediate features of the neural network (i.e., the shallow features of the RGB input image).
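A minimal NumPy sketch of this 3-to-64-channel mapping follows. The patent only says "a layer of convolution", so the 1 × 1 (pointwise) kernel and the random weight matrix `W` here are illustrative assumptions, not the patent's actual layer:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical weights: 3 input channels -> 64 output channels.
W = rng.standard_normal((3, 64)).astype(np.float32)

def shallow_features(img):
    # A 1x1 (pointwise) convolution mixes channels independently at every
    # pixel; kernel size is an assumption made to keep the sketch short.
    return np.tensordot(img, W, axes=([2], [0]))

x = rng.standard_normal((32, 32, 3)).astype(np.float32)
f = shallow_features(x)
print(f.shape)  # (32, 32, 64)
```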
The input of the depth feature extraction module is the output of the shallow feature extraction module, and its output is the depth feature of the image.
The input of the up-sampling reconstruction module is the output of the shallow feature extraction module and the output of the depth feature extraction module, and the output is a high-resolution image.
Based on the structure of the image super-resolution model shown in fig. 2, in another embodiment of the present application, a specific implementation manner of obtaining a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using the image super-resolution model in step S102 is described.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for obtaining a super-resolution image corresponding to an image based on a shallow feature of the image and a depth feature of the image by using a super-resolution image model according to an embodiment of the present application, where the method may include the following steps:
step S201: and performing shallow feature extraction processing on the image by using a shallow feature extraction module of the image super-resolution model to obtain shallow features of the image.
Step S202: and performing depth feature extraction processing on the shallow features of the image by using a depth feature extraction module of the image super-resolution model to obtain the depth features of the image.
Step S203: and performing up-sampling reconstruction processing on the shallow feature of the image and the depth feature of the image by using an up-sampling reconstruction module of the image super-resolution model to obtain a super-resolution image corresponding to the image.
It should be noted that in the image super-resolution model shown in fig. 2, for specific implementation of the shallow feature extraction module and the up-sampling reconstruction module, reference may be made to related modules of an existing image super-resolution model (such as RCAN, SAN, etc.), but a depth feature extraction module of the existing image super-resolution model (such as RCAN, SAN, etc.) cannot apply the texture feature of an image itself, so that a new implementation of the depth feature extraction module is proposed in the present application, and is specifically described in detail by the following embodiments.
In another embodiment of the present application, the construction of the depth feature extraction module of the image super-resolution model is described in detail. Specifically, the depth feature extraction module of the image super-resolution model may be constructed based on a pre-constructed depth feature extraction unit. The depth feature extraction unit comprises a first preset number of branches, each branch being a stack of a different number of residual units; the depths of adjacent branches differ by a factor of 2, and adjacent branches share at least one convolution layer.
It should be noted that each branch, being a stack of a different number of residual units, extracts semantic information of the image at a different level. The depths of adjacent branches differ by a factor of 2, and cross-channel feature fusion is realized by 1 × 1 convolutions at intermediate stages, extracting key features that appear repeatedly at different levels. This effectively exploits the fact that parts of an image's texture recur, a characteristic of the super-resolution task.
Referring to fig. 4, fig. 4 is a schematic diagram of a depth feature extraction unit according to an embodiment of the present disclosure. As shown in fig. 4, the depth feature extraction unit includes 4 branches: the first branch includes 1 residual unit RU, the second branch includes 2 residual units RU, the third branch includes 4 residual units RU, and the fourth branch includes 8 residual units RU. The first and second branches share 1 convolution layer (1 × 1 conv), the second and third branches share 2 convolution layers, and the third and fourth branches share 3 convolution layers.
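A rough NumPy sketch of such a multi-branch unit follows. Pointwise channel mixes stand in for real residual units, a shared weight list loosely mirrors the layer sharing between adjacent branches, and mean fusion stands in for the 1 × 1-conv fusion; all of these are simplifying assumptions, not the patent's exact layers:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 8  # channel count, kept small for the sketch

def residual_unit(x, w):
    # RU: identity skip plus a transform. A real RU would use spatial
    # convolutions; a scaled pointwise mix keeps this in plain NumPy.
    return x + 0.1 * np.tensordot(x, w, axes=([2], [0]))

def branch(x, depth, weights):
    # One branch = `depth` stacked residual units; reusing the shared
    # `weights` list mimics adjacent branches sharing layers.
    for w in weights[:depth]:
        x = residual_unit(x, w)
    return x

weights = [rng.standard_normal((C, C)).astype(np.float32) for _ in range(8)]
x = rng.standard_normal((16, 16, C)).astype(np.float32)

# Four branches whose depths differ by a factor of 2, as in Fig. 4.
outputs = [branch(x, d, weights) for d in (1, 2, 4, 8)]
fused = np.mean(outputs, axis=0)  # stand-in for the 1x1-conv fusion
print(fused.shape)  # (16, 16, 8)
```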
As an implementation manner, constructing the depth feature extraction module of the image super-resolution model based on a pre-constructed depth feature extraction unit includes: connecting a plurality of depth feature extraction units in series to construct the depth feature extraction module; wherein, among the plurality of depth feature extraction units, the input of the first depth feature extraction unit is the shallow feature of the image, and the input of each subsequent depth feature extraction unit is the output of the depth feature extraction unit immediately preceding it.
Based on the above, the depth feature extraction module using the image super-resolution model performs depth feature extraction processing on the shallow features of the image to obtain the depth features of the image, and the depth feature extraction module includes: inputting the shallow features of the image into a first depth feature extraction unit of the depth feature extraction module to obtain the output of each depth feature extraction unit; and fusing the output of each depth feature extraction unit to obtain the depth feature of the image.
For ease of understanding, please refer to fig. 5, a schematic diagram of a depth feature extraction module according to an embodiment of the present disclosure. As shown in fig. 5, the depth feature extraction module includes 12 depth feature extraction units connected in series. The input of the first depth feature extraction unit is the shallow feature of the image, the input of each subsequent depth feature extraction unit is the output of the unit immediately preceding it, and the outputs of all units are fused to obtain the depth feature of the image.
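The serial chaining and fusion can be sketched as follows, with `extraction_unit` as a hypothetical placeholder for a full depth feature extraction unit and mean fusion as an assumed fusion rule:

```python
import numpy as np

def extraction_unit(x, w):
    # Placeholder for one depth feature extraction unit (Fig. 4).
    return x + 0.05 * np.tensordot(x, w, axes=([2], [0]))

rng = np.random.default_rng(0)
C = 8
ws = [rng.standard_normal((C, C)).astype(np.float32) for _ in range(12)]

x = rng.standard_normal((16, 16, C)).astype(np.float32)  # shallow feature
outputs = []
for w in ws:                 # 12 units in series: each consumes the previous output
    x = extraction_unit(x, w)
    outputs.append(x)

depth_feature = np.mean(outputs, axis=0)  # fuse the outputs of all 12 units
print(depth_feature.shape)  # (16, 16, 8)
```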
As an implementation manner, the depth feature extraction module for constructing the image super-resolution model based on a pre-constructed depth feature extraction unit includes: widening the depth feature extraction unit to obtain a widened depth feature extraction unit, wherein the widened depth feature extraction unit is a depth feature extraction module of the image super-resolution model; the widened depth feature extraction unit comprises a second preset number of branches, each branch is stacked by different numbers of residual error units, the depth difference between adjacent branches is 2 times, the adjacent branches share at least one convolution layer, and the second preset number is larger than the first preset number.
Based on the above, the depth feature extraction module using the image super-resolution model performs depth feature extraction processing on the shallow features of the image to obtain the depth features of the image, and the depth feature extraction module includes: inputting the shallow features of the image into the widened depth feature extraction unit to obtain each branch output in the widened depth feature extraction unit; and fusing the branch outputs to obtain the depth characteristics of the image.
For ease of understanding, please refer to fig. 6, fig. 6 is a schematic diagram of a depth feature extraction module disclosed in an embodiment of the present application. As shown in fig. 6, the depth feature extraction module includes 7 branches.
In another embodiment of the present application, a method for training the image super-resolution model is described. The training mode of the image super-resolution model can comprise the following steps:
step S301: training images are acquired, and low-resolution images and high-resolution images corresponding to the training images are acquired.
In the present application, the training images may be the first 800 images of the DIV2K training data set. For each training image, bicubic interpolation down-sampling may be performed to obtain a high-resolution/low-resolution pair, used respectively as the high-resolution image and the low-resolution image corresponding to the training image.
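Pair construction can be sketched as below. To stay in plain NumPy, 2× average pooling is substituted for bicubic interpolation; the real DIV2K pipeline uses bicubic, so this is a stated simplification:

```python
import numpy as np

def downsample_2x(img):
    # 2x average pooling as a simple stand-in for bicubic downsampling.
    H, W, C = img.shape
    return img.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))

rng = np.random.default_rng(0)
hr = rng.random((64, 64, 3)).astype(np.float32)  # "high-resolution" sample (label)
lr = downsample_2x(hr)                           # paired low-resolution input
print(lr.shape, hr.shape)  # (32, 32, 3) (64, 64, 3)
```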
Step S302: and taking the low-resolution image corresponding to the training image as a training sample, taking the high-resolution image corresponding to the training image as a sample label, and taking the minimum loss function as a training target to train to obtain the image super-resolution model.
In this application, the loss function can be expressed as:

$$L(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left\|F\left(I_{LR}^{(i)};\theta\right)-I_{HR}^{(i)}\right\|_{1}$$

wherein $\theta$ represents the parameters of the image super-resolution model, $F$ denotes the model, $N$ is the number of training pairs, and $I_{LR}^{(i)}$ and $I_{HR}^{(i)}$ represent the $i$-th low-resolution/high-resolution input image pair.
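Assuming the loss is the mean absolute (L1) error — the common choice for this family of super-resolution models — it can be computed as:

```python
import numpy as np

def l1_loss(pred, target):
    # Mean absolute error over all pixels and channels.
    return float(np.abs(pred - target).mean())

pred = np.ones((4, 4, 3), dtype=np.float32) * 0.5  # model output (illustrative)
target = np.zeros((4, 4, 3), dtype=np.float32)     # ground-truth HR image
print(l1_loss(pred, target))  # 0.5
```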
The image super-resolution device disclosed in the embodiment of the present application is described below, and the image super-resolution device described below and the image super-resolution method described above may be referred to in correspondence with each other.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an image super-resolution device disclosed in an embodiment of the present application. As shown in fig. 7, the image super-resolution device may include:
an acquisition unit 11, configured to acquire an image to be subjected to super-resolution of the image;
and the processing unit 12 is configured to obtain a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using an image super-resolution model, where the depth feature of the image includes a self-texture feature of the image.
Referring to fig. 8, fig. 8 is a block diagram of a hardware structure of an image super-resolution device provided in an embodiment of the present application. As shown in fig. 8, the hardware structure of the image super-resolution device may include: at least one processor 1, at least one communication interface 2, at least one memory 3, and at least one communication bus 4;
in the embodiment of the application, the number of the processor 1, the communication interface 2, the memory 3 and the communication bus 4 is at least one, and the processor 1, the communication interface 2 and the memory 3 complete mutual communication through the communication bus 4;
the processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention;
the memory 3 may include a high-speed RAM memory, and may further include a non-volatile memory (non-volatile memory) or the like, such as at least one disk memory;
wherein the memory stores a program and the processor can call the program stored in the memory, the program for:
acquiring an image to be subjected to super-resolution;
and obtaining a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using an image super-resolution model, wherein the depth feature of the image comprises the self texture feature of the image.
Alternatively, the detailed function and the extended function of the program may refer to the above description.
Embodiments of the present application further provide a storage medium, where a program suitable for execution by a processor may be stored, where the program is configured to:
acquiring an image to be subjected to super-resolution;
and obtaining a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using an image super-resolution model, wherein the depth feature of the image comprises the self texture feature of the image.
Alternatively, the detailed function and the extended function of the program may be as described above.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. An image super-resolution method is characterized by comprising the following steps:
acquiring an image to be subjected to super-resolution;
obtaining a super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using an image super-resolution model, wherein the depth feature of the image comprises the self texture feature of the image;
the obtaining of the super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using the image super-resolution model includes:
performing shallow feature extraction processing on the image by using a shallow feature extraction module of an image super-resolution model to obtain shallow features of the image;
utilizing a depth feature extraction module of an image super-resolution model to perform depth feature extraction processing on shallow features of the image to obtain depth features of the image;
performing up-sampling reconstruction processing on the shallow feature of the image and the depth feature of the image by using an up-sampling reconstruction module of an image super-resolution model to obtain a super-resolution image corresponding to the image;
the method for constructing the depth feature extraction module of the image super-resolution model comprises:
constructing the depth feature extraction module of the image super-resolution model based on a pre-constructed depth feature extraction unit;
wherein the depth feature extraction unit comprises a first preset number of branches, each branch is formed by stacking a different number of residual units, the depths of adjacent branches differ by a factor of 2, and adjacent branches share at least one convolutional layer;
the constructing of the depth feature extraction module of the image super-resolution model based on the pre-constructed depth feature extraction unit comprises:
connecting a plurality of the depth feature extraction units in series to construct the depth feature extraction module of the image super-resolution model;
wherein, among the plurality of depth feature extraction units, the input of the first depth feature extraction unit is the shallow features of the image, and the input of each depth feature extraction unit other than the first is the output of the immediately preceding depth feature extraction unit.
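As a rough illustration of the structure recited in claim 1, the multi-branch unit and its serial stacking can be sketched as follows. This is a hedged toy sketch in NumPy, not the patented implementation: the scalar-weight `residual_unit` stands in for a real convolutional residual block, the shared weight `w` models the convolution layer shared by adjacent branches, and averaging is only one possible choice of fusion.

```python
import numpy as np

def residual_unit(x, w):
    # Toy stand-in for a convolutional residual block: x + F(x).
    return x + w * x

def depth_feature_extraction_unit(x, num_branches=3, w=0.1):
    # Branch i stacks 2**i residual units, so the depths of adjacent
    # branches differ by a factor of 2; the shared weight w plays the
    # role of the convolution layer shared by adjacent branches.
    branch_outputs = []
    for i in range(num_branches):
        y = x
        for _ in range(2 ** i):
            y = residual_unit(y, w)
        branch_outputs.append(y)
    # Fuse the branch outputs (simple averaging as one possible fusion).
    return np.mean(branch_outputs, axis=0)

def depth_feature_extraction_module(x, num_units=4, **unit_kwargs):
    # Serially connected units: the first takes the shallow features,
    # each later unit takes the output of the immediately preceding unit.
    unit_outputs = []
    for _ in range(num_units):
        x = depth_feature_extraction_unit(x, **unit_kwargs)
        unit_outputs.append(x)
    return x, unit_outputs
```

Collecting `unit_outputs` along the way also makes the per-unit outputs available for the fusion step recited in claim 2.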
2. The method according to claim 1, wherein the performing depth feature extraction processing on the shallow features of the image by using a depth feature extraction module of the image super-resolution model to obtain the depth features of the image comprises:
inputting the shallow features of the image into a first depth feature extraction unit of the depth feature extraction module to obtain the output of each depth feature extraction unit;
and fusing the output of each depth feature extraction unit to obtain the depth feature of the image.
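The claim does not specify the fusion beyond "fusing" the per-unit outputs; one plausible reading, sketched below, is a weighted combination of the stacked unit outputs (the learned analogue of concatenation followed by a 1x1 convolution). The uniform weights here are an assumption for illustration, not values from the patent.

```python
import numpy as np

def fuse_unit_outputs(unit_outputs, weights=None):
    # Stack the output of every depth feature extraction unit and mix
    # them with per-unit weights, mimicking concat + 1x1-conv fusion.
    stacked = np.stack(unit_outputs, axis=0)        # (num_units, H, W)
    if weights is None:
        weights = np.full(len(unit_outputs), 1.0 / len(unit_outputs))
    return np.tensordot(weights, stacked, axes=1)   # (H, W)
```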
3. The method according to claim 1, wherein the constructing of the depth feature extraction module of the image super-resolution model based on a pre-constructed depth feature extraction unit comprises:
widening the depth feature extraction unit to obtain a widened depth feature extraction unit, the widened depth feature extraction unit serving as the depth feature extraction module of the image super-resolution model;
wherein the widened depth feature extraction unit comprises a second preset number of branches, each branch is formed by stacking a different number of residual units, the depths of adjacent branches differ by a factor of 2, adjacent branches share at least one convolutional layer, and the second preset number is larger than the first preset number.
4. The method according to claim 3, wherein the performing depth feature extraction processing on the shallow features of the image by using a depth feature extraction module of the image super-resolution model to obtain the depth features of the image comprises:
inputting the shallow features of the image into the widened depth feature extraction unit to obtain each branch output in the widened depth feature extraction unit;
and fusing the branch outputs to obtain the depth characteristics of the image.
5. The method of claim 1, wherein the image super-resolution model is trained as follows:
acquiring a training image, and a low-resolution image and a high-resolution image corresponding to the training image;
and taking the low-resolution image corresponding to the training image as the training sample and the high-resolution image corresponding to the training image as the sample label, and training the image super-resolution model with minimization of a loss function as the training objective.
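Claim 5 only requires training on low-resolution/high-resolution pairs toward a minimized loss; the concrete choices below (block-average downsampling to form the low-resolution sample, and an L1 loss) are common in super-resolution work but are assumptions for illustration, not terms of the claim.

```python
import numpy as np

def make_training_pair(hr, scale=2):
    # Derive the low-resolution training sample from the high-resolution
    # label by block averaging (bicubic downsampling is typical in practice).
    h, w = hr.shape
    lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return lr, hr

def l1_loss(sr, hr):
    # Mean absolute error between the super-resolved output and the
    # high-resolution sample label; this is the quantity minimized.
    return float(np.mean(np.abs(sr - hr)))
```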
6. An image super-resolution device, comprising:
the acquisition unit is used for acquiring an image to be subjected to super-resolution;
the processing unit is used for obtaining a super-resolution image corresponding to the image based on the shallow features of the image and the depth features of the image by using an image super-resolution model, wherein the depth features of the image include the image's own texture features;
the obtaining of the super-resolution image corresponding to the image based on the shallow feature of the image and the depth feature of the image by using the image super-resolution model includes:
performing shallow feature extraction processing on the image by using a shallow feature extraction module of an image super-resolution model to obtain shallow features of the image;
utilizing a depth feature extraction module of an image super-resolution model to perform depth feature extraction processing on shallow features of the image to obtain depth features of the image;
performing up-sampling reconstruction processing on the shallow feature of the image and the depth feature of the image by using an up-sampling reconstruction module of an image super-resolution model to obtain a super-resolution image corresponding to the image;
the method for constructing the depth feature extraction module of the image super-resolution model comprises:
constructing the depth feature extraction module of the image super-resolution model based on a pre-constructed depth feature extraction unit;
wherein the depth feature extraction unit comprises a first preset number of branches, each branch is formed by stacking a different number of residual units, the depths of adjacent branches differ by a factor of 2, and adjacent branches share at least one convolutional layer;
the constructing of the depth feature extraction module of the image super-resolution model based on the pre-constructed depth feature extraction unit comprises:
connecting a plurality of the depth feature extraction units in series to construct the depth feature extraction module of the image super-resolution model;
wherein, among the plurality of depth feature extraction units, the input of the first depth feature extraction unit is the shallow features of the image, and the input of each depth feature extraction unit other than the first is the output of the immediately preceding depth feature extraction unit.
7. An image super-resolution device, comprising a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the steps of the image super-resolution method according to any one of claims 1 to 5.
CN202010952506.8A 2020-09-11 2020-09-11 Image super-resolution method, device and equipment Active CN112053287B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010952506.8A CN112053287B (en) 2020-09-11 2020-09-11 Image super-resolution method, device and equipment

Publications (2)

Publication Number Publication Date
CN112053287A CN112053287A (en) 2020-12-08
CN112053287B true CN112053287B (en) 2023-04-18

Family

ID=73610515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010952506.8A Active CN112053287B (en) 2020-09-11 2020-09-11 Image super-resolution method, device and equipment

Country Status (1)

Country Link
CN (1) CN112053287B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801875B (en) * 2021-02-05 2022-04-22 深圳技术大学 Super-resolution reconstruction method and device, computer equipment and storage medium
CN114049254B (en) * 2021-10-29 2022-11-29 华南农业大学 Low-pixel ox-head image reconstruction and identification method, system, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785236A (en) * 2019-01-21 2019-05-21 中国科学院宁波材料技术与工程研究所 A kind of image super-resolution method based on super-pixel and convolutional neural networks
CN109903226A (en) * 2019-01-30 2019-06-18 天津城建大学 Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks
CN110298791A (en) * 2019-07-08 2019-10-01 西安邮电大学 A kind of super resolution ratio reconstruction method and device of license plate image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108288251A (en) * 2018-02-11 2018-07-17 深圳创维-Rgb电子有限公司 Image super-resolution method, device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN108062754B (en) Segmentation and identification method and device based on dense network image
CN109508681B (en) Method and device for generating human body key point detection model
EP3255586A1 (en) Method, program, and apparatus for comparing data graphs
CN111832570A (en) Image semantic segmentation model training method and system
JP6731529B1 (en) Single-pixel attack sample generation method, device, equipment and storage medium
CN112053287B (en) Image super-resolution method, device and equipment
CN114170167B (en) Polyp segmentation method and computer device based on attention-guided context correction
CN111914654B (en) Text layout analysis method, device, equipment and medium
CN114049332A (en) Abnormality detection method and apparatus, electronic device, and storage medium
CN110852980A (en) Interactive image filling method and system, server, device and medium
CN110827371A (en) Certificate photo generation method and device, electronic equipment and storage medium
CN112700460B (en) Image segmentation method and system
CN112950738B (en) Rendering engine processing method and device, storage medium and electronic equipment
CN111932480A (en) Deblurred video recovery method and device, terminal equipment and storage medium
CN116343052B (en) Attention and multiscale-based dual-temporal remote sensing image change detection network
CN112017162B (en) Pathological image processing method, pathological image processing device, storage medium and processor
CN111553861B (en) Image super-resolution reconstruction method, device, equipment and readable storage medium
CN110119736B (en) License plate position identification method and device and electronic equipment
CN113591528A (en) Document correction method, device, computer equipment and storage medium
Liu et al. A deep recursive multi-scale feature fusion network for image super-resolution
CN115345866A (en) Method for extracting buildings from remote sensing images, electronic equipment and storage medium
CN109685738A (en) A kind of method and device of improving image definition
CN111144407A (en) Target detection method, system, device and readable storage medium
CN112489103B (en) High-resolution depth map acquisition method and system
CN113643173A (en) Watermark removing method, watermark removing device, terminal equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant