CN108492264B - Single-frame image fast super-resolution method based on sigmoid transformation


Info

Publication number
CN108492264B
Authority
CN
China
Prior art keywords: image, super-resolution, correction term
Prior art date
Legal status
Active
Application number
CN201810195727.8A
Other languages
Chinese (zh)
Other versions
CN108492264A (en)
Inventor
Lin Zaiping
Wang Longguang
An Wei
Sheng Weidong
Li Jun
Zeng Yaoyuan
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
2018-03-09
Filing date
2018-03-09
Publication date
2020-03-31
Application filed by National University of Defense Technology
Priority to CN201810195727.8A
Publication of CN108492264A
Application granted
Publication of CN108492264B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/73 - Deblurring; Sharpening
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of image processing and relates to a single-frame image fast super-resolution method based on sigmoid transformation. The method comprises the following steps: (S1) acquire the image to be processed; (S2) initialize, letting l denote the iteration number; (S3) perform bicubic interpolation on the grayscale image to obtain a super-resolution image X_l; (S4) apply Gaussian blur and downsampling to X_l to obtain a low-resolution image Y_l, and perform bicubic interpolation on the difference between Y_l and the input image Y to obtain correction term 1; (S5) sharpen the super-resolution image by sigmoid transformation to obtain a sharpened image Z_l, and take the difference between Z_l and the super-resolution image X_l as correction term 2; (S6) update X_l with the update model to obtain the super-resolution image X_{l+1}; (S7) judge whether the sum of correction term 1 and correction term 2 is less than a set threshold; if so, terminate the iteration and take the updated image X_{l+1} as the final result; otherwise, increase the iteration number l by 1 and return to step (S4).

Description

Single-frame image fast super-resolution method based on sigmoid transformation
Technical Field
The invention belongs to the field of image processing, and relates to a single-frame image fast super-resolution method based on sigmoid transformation.
Background
Vision is the most important way for humans to obtain information and is essential for perceiving the external world. As a faithful record of the objective world, an image is an important carrier of visual information; its clarity strongly affects how much visual information can be extracted, and a low resolution loses a large amount of image detail and hinders the acquisition of image information.
In recent years, image resolution has improved greatly with advances in camera manufacturing, but in some application scenarios the available resolution is still insufficient, and in others the image quality remains poor because of limitations in transmission conditions, imaging environment and the like. Constrained by hardware cost and process level, camera resolution is difficult to raise substantially in the short term, and the trade-off between resolution and field of view means that it cannot be increased without limit.
Traditional super-resolution methods mainly rely on prior information to alleviate the ill-posed (underdetermined) nature of the super-resolution problem and obtain a stable result; the regularization terms used in the iterative super-resolution process are mostly designed from smoothness priors, sparsity priors, edge priors and the like. However, smoothness priors tend to blur edges, sparsity priors easily lose image detail, and edge priors are computationally expensive and poorly adaptive.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a single-frame image fast super-resolution method based on sigmoid transformation, which achieves a better visual restoration effect. The sigmoid function, i.e. an S-shaped curve function, has the basic form

f(x) = 1 / (1 + e^(-x)),

where x is the independent variable, f(x) is the function value, and e is the natural base. The specific technical scheme is as follows.
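For illustration only (not part of the original disclosure), the parametric sigmoid and its inverse that underlie the sharpening transform described below can be written as the following Python sketch; the parameters a (slope) and b (position) follow the description, while the clipping constant eps is an added numerical safeguard:

```python
import numpy as np

def sigmoid(x, a=1.0, b=0.0):
    """Parametric sigmoid with slope a and position (inflection point) b."""
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

def sigmoid_inverse(y, a=1.0, b=0.0, eps=1e-6):
    """Map a value in (0, 1) back to the sigmoid argument space."""
    y = np.clip(y, eps, 1.0 - eps)   # guard against log(0) at the value extremes
    return b - np.log(1.0 / y - 1.0) / a
```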
A single-frame image fast super-resolution method based on sigmoid transformation comprises the following steps:
(S1) acquiring the image to be processed; if it is a grayscale image, proceeding directly to step (S2); if it is a color image, converting it into a grayscale image;
(S2) initializing: letting l denote the iteration number, where l is an integer with initial value l = 0, and letting Y denote the grayscale image from step (S1);
(S3) performing bicubic interpolation on the grayscale image Y to obtain a super-resolution image X_l;
(S4) based on the degradation model of the low-resolution image, applying Gaussian blur and downsampling to the super-resolution image X_l to obtain a low-resolution image Y_l, and performing bicubic interpolation on the difference between Y_l and the grayscale image Y to obtain correction term 1 of the super-resolution image;
(S5) sharpening the super-resolution image using sigmoid transformation to obtain a sharpened super-resolution image Z_l, and taking the difference between the sharpened image Z_l and the super-resolution image X_l as correction term 2 of the super-resolution image;
(S6) according to correction term 1 and correction term 2, updating the super-resolution image X_l based on the super-resolution image update model to obtain an updated super-resolution image X_{l+1};
(S7) judging whether the sum of correction term 1 and correction term 2 is less than a set threshold; if so, terminating the iteration and taking the updated super-resolution image X_{l+1} as the final super-resolution image; otherwise, increasing the iteration number l by 1 and returning to step (S4).
Preferably, step (S4) further includes adaptively determining the position transformation parameter b_{1i} of the sigmoid transformation from correction term 1, as follows: for correction term 1 of the super-resolution image, the position parameters corresponding to different pixels in the sigmoid transformation are estimated using

b_{1i} = β × (correction term 1)_i,

where b_{1i} is the position transformation parameter corresponding to pixel i in the sigmoid transformation, (correction term 1)_i is the value of correction term 1 at pixel i, and β is a constant.
Preferably, in step (S1), if the image to be processed is a color image, it is converted into a grayscale image as follows: the RGB channels of the color image are converted into YUV channels, the Y-channel image is extracted as the grayscale image, and bicubic interpolation is performed on the U and V channel images respectively to obtain the interpolated U and V channels; step (S7) then further comprises merging the final super-resolution image with the interpolated U and V channels and converting the merged result back to RGB space to obtain the super-resolution color image.
Preferably, step (S5) obtains the sharpened super-resolution image Z_l as follows: partially overlapping image blocks P_i are extracted from the super-resolution image; each image block P_i is sharpened using sigmoid transformation to obtain a sharpened image block P'_i; and the sharpened image blocks P'_i are accumulated, with the average taken over overlapping regions, to obtain the sharpened super-resolution image Z_l.
Preferably, the super-resolution image update model in step (S6) is:

X_{l+1} = X_l - η × (correction term 1 + λ × correction term 2),

where η is the learning rate and λ is the regularization coefficient.
Preferably, the learning rate η is 0.1 and the regularization coefficient λ is 0.1.
The beneficial effects of the invention are as follows: the method realizes adaptive sharpening based on local image blocks through parameter transformation of the sigmoid function; it achieves edge enhancement while retaining good robustness and noise resistance, and it obtains a better visual restoration effect.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a sharpening process of an image block based on sigmoid transformation;
fig. 3 is a schematic diagram of a low-resolution input image that is integrally sharpened by extracting overlapping image blocks.
Detailed Description
The invention is further illustrated by the following figures and examples.
Fig. 1 shows the flow chart of the method of the invention, which specifically comprises the following steps:
The first step: acquire the image to be processed; if it is a grayscale image, go directly to the second step; if it is a color image, convert it into a grayscale image by converting the RGB channels of the low-resolution color image into YUV channels and extracting the Y-channel image as the low-resolution grayscale image. YUV refers to a color space representation in which the luminance and chrominance components are separated.
the second step is that: initializing, where l denotes the number of iterations, l is an integer, an initial value l is 0, and Y denotes the grayscale image in step (S1);
the third step: carrying out bicubic interpolation on the gray level image Y to obtain an initial super-resolution image X0
The fourth step: according to the degradation model of the low-resolution image, apply Gaussian blur and downsampling to the super-resolution image X_l to obtain a low-resolution image Y_l, and perform bicubic interpolation on the difference between Y_l and the low-resolution image Y to obtain correction term 1 of the super-resolution image.
The position transformation parameter b_{1i} of the sigmoid transformation is determined adaptively from correction term 1: for correction term 1 of the super-resolution image, the position parameters corresponding to different pixels in the sigmoid transformation are estimated using

b_{1i} = β × (correction term 1)_i,

where b_{1i} is the position transformation parameter corresponding to pixel i in the sigmoid transformation, (correction term 1)_i is the value of correction term 1 at pixel i, and β = 0.05.
Specifically, the obtained super-resolution image X_l is re-degraded to the scale of the input low-resolution image using the degradation model of the image, namely:

Y_l = D H X_l,

where X_l is the super-resolution image obtained in the l-th iteration, H is the Gaussian blur operation, D is the downsampling operation, and Y_l is the degradation result obtained in the l-th iteration.
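As a minimal illustrative sketch of this step (not part of the original disclosure), correction term 1 can be computed as follows, with scipy's gaussian_filter standing in for the blur H, strided slicing for the downsampling D, a cubic-spline zoom as a stand-in for bicubic interpolation, and an assumed blur width sigma:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def correction_term_1(X_l, Y, gamma, sigma=1.5):
    """Re-degrade the current estimate X_l and upsample its error w.r.t. Y."""
    Y_l = gaussian_filter(X_l, sigma)[::gamma, ::gamma]  # Y_l = D H X_l
    residual = Y_l - Y                                   # degradation error on the LR grid
    return zoom(residual, gamma, order=3)                # interpolate back to the SR grid
```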
The fifth step: sharpen the super-resolution image using sigmoid transformation to obtain a sharpened super-resolution image Z_l, and take the difference between the sharpened image Z_l and the super-resolution image X_l as correction term 2 of the super-resolution image.
First, image blocks are extracted from the super-resolution image. To ensure consistency and smoothness of the edges between image blocks, the blocks are extracted with mutual overlap, i.e. each block shares a certain overlapping region with its neighbouring blocks, as shown in fig. 3. To ensure that each block retains the local structure information of the image, the block size is chosen to cover a 3 x 3 region of the low-resolution image, i.e. the block size is 3γ x 3γ, where γ is the upsampling factor of the super-resolution image. After the block size is determined, the size of the overlapping region between blocks is determined: a larger overlap gives a better reconstruction but also a larger amount of computation, and in this embodiment the overlap between adjacent blocks is chosen as 2γ.
After the image blocks have been extracted, each block is sharpened using sigmoid transformation, as shown in fig. 2:
(1) For the extracted image block, compute the minimum pixel value min_patch as the base, and subtract the base min_patch from the gray value z_i of each pixel to obtain the block residual values, where patch denotes the extracted image block and i denotes the pixel index within the block;
(2) Compute the maximum pixel value max_patch of the block to obtain the block normalization constant max_patch - min_patch. Use

y_i = (z_i - min_patch) / (max_patch - min_patch)

to normalize the base-removed residual block, quantizing the pixel values to [0, 1], where y_i denotes the quantized value of pixel i in the image block;
(3) First, project the normalized block values y_i into the sigmoid argument space x through the inverse sigmoid transformation, i.e.

x_i = f_{a0,b0}^{-1}(y_i) = b_0 - (1/a_0) ln(1/y_i - 1),

and then re-project the x_i back into image space using the parameter-transformed sigmoid function to obtain the sharpened values y_i':

y_i' = f_{a1,b1i}(x_i) = 1 / (1 + e^(-a_1 (x_i - b_{1i}))),

where f_{a0,b0} and f_{a1,b1i} are the sigmoid functions before and after the parameter transformation, a_0 = 1, b_0 = 0, a_1 = 2, and b_{1i} is the position transformation parameter obtained in the fourth step;
(4) Finally, map the sigmoid-transformed values back to the original gray-level range using the block normalization constant and the block base, which gives the final sharpening result of the image block.
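Steps (1)-(4) for a single image block can be sketched as follows (illustration only), reusing the sigmoid and sigmoid_inverse helpers sketched earlier; b1 is the position parameter obtained from correction term 1 (a scalar or a per-pixel array), and the flat-block guard is an added safeguard rather than part of the original description:

```python
import numpy as np

def sharpen_block(patch, b1, a0=1.0, b0=0.0, a1=2.0):
    """Sigmoid-based sharpening of one image block."""
    base = patch.min()                     # (1) block base (minimum pixel value)
    span = patch.max() - base              # (2) normalisation constant
    if span == 0:                          # uniform block: nothing to sharpen
        return patch.copy()
    y = (patch - base) / span              # normalised values in [0, 1]
    x = sigmoid_inverse(y, a0, b0)         # (3) project into sigmoid argument space
    y_sharp = sigmoid(x, a1, b1)           # re-project with the steeper sigmoid
    return y_sharp * span + base           # (4) restore the original value range
```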
After the sharpening result of every image block has been obtained, each result is placed back at the corresponding position in the image according to the block position, and the average over the overlapping regions is taken as the final sharpened result, as shown in fig. 3. The difference between the super-resolution image X_l and the sharpened image Z_l is then taken as correction term 2 of the super-resolution image.
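The overlapped block extraction and the overlap-averaged reassembly just described can be sketched as follows (illustrative helper names, simplified border handling):

```python
import numpy as np

def extract_blocks(img, gamma):
    """Extract overlapping 3*gamma x 3*gamma blocks with step gamma (2*gamma overlap)."""
    size, step = 3 * gamma, gamma
    H, W = img.shape
    blocks, positions = [], []
    for r in range(0, H - size + 1, step):
        for c in range(0, W - size + 1, step):
            blocks.append(img[r:r + size, c:c + size].copy())
            positions.append((r, c))
    return blocks, positions

def assemble_blocks(blocks, positions, shape, gamma):
    """Place blocks back at their positions and average the overlapping regions."""
    size = 3 * gamma
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for blk, (r, c) in zip(blocks, positions):
        acc[r:r + size, c:c + size] += blk   # accumulate sharpened blocks
        cnt[r:r + size, c:c + size] += 1.0   # count contributions per pixel
    return acc / np.maximum(cnt, 1.0)        # average over the overlaps
```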
The sixth step: according to correction term 1 and correction term 2, update the super-resolution image with the following formula, based on the super-resolution image update model, to obtain the updated super-resolution image X_{l+1} (the overall flow is shown in fig. 1):

X_{l+1} = X_l - η × (correction term 1 + λ × correction term 2),

which can also be expressed as

X_{l+1} = X_l - η × ( (D H X_l - Y)↑ + λ (X_l - Z_l) ),

where η is the learning rate of the super-resolution image update, λ is the regularization coefficient that weights the sharpening regularization in the update, and (·)↑ denotes bicubic interpolation of the residual up to the super-resolution grid. In this example, η = 0.1 and λ = 0.1.
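Putting the pieces together, the iteration can be sketched as follows, reusing the helpers sketched above; the per-pixel position parameters are taken as β times correction term 1, and the iteration cap and the mean-absolute-value form of the stopping test are assumptions, since the text only specifies a threshold on the sum of the two correction terms:

```python
import numpy as np
from scipy.ndimage import zoom

def sharpen_image(X, b1_map, gamma):
    """Sharpen X block by block (3*gamma blocks, 2*gamma overlap) and average overlaps."""
    blocks, positions = extract_blocks(X, gamma)
    b1_blocks, _ = extract_blocks(b1_map, gamma)
    sharpened = [sharpen_block(p, b1) for p, b1 in zip(blocks, b1_blocks)]
    return assemble_blocks(sharpened, positions, X.shape, gamma)

def super_resolve(Y, gamma, n_iter=20, eta=0.1, lam=0.1, beta=0.05, tol=1e-3):
    """Iterative single-frame super-resolution of a grayscale image Y."""
    X = zoom(Y, gamma, order=3)                        # (S3) bicubic initialisation
    for _ in range(n_iter):
        corr1 = correction_term_1(X, Y, gamma)         # (S4) data-fidelity correction
        Z = sharpen_image(X, beta * corr1, gamma)      # (S5) sigmoid sharpening, b_1i = beta * corr1_i
        corr2 = X - Z
        update = corr1 + lam * corr2
        X = X - eta * update                           # (S6) update model
        if np.mean(np.abs(update)) < tol:              # (S7) stopping test (assumed measure)
            break
    return X
```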
The seventh step: judge whether the sum of correction term 1 and correction term 2 is less than the set threshold; if so, terminate the iteration and take the updated super-resolution image X_{l+1} as the final super-resolution image; otherwise, increase the iteration number l by 1 and return to the fourth step.
For a grayscale input, the iteration result is the final super-resolution result. If the input image is a color image, X_{l+1} is the final super-resolution result of the Y channel; it is combined with the bicubic interpolation results of the U and V channels obtained in the first step, and the YUV channels are converted back to RGB channels to obtain the final color super-resolution result.
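For a colour input, the path described above can be sketched as follows under stated assumptions: BT.601 matrices are used as one concrete choice of the YUV conversion (the text only says YUV), pixel values are assumed to be floats in [0, 1], and super_resolve and zoom are the helpers sketched earlier:

```python
import numpy as np
from scipy.ndimage import zoom

# BT.601 RGB -> YUV matrix (an assumed concrete choice of YUV definition)
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def super_resolve_color(rgb, gamma):
    """Super-resolve the Y channel iteratively; interpolate U and V bicubically."""
    yuv = rgb @ RGB2YUV.T                      # H x W x 3 -> (Y, U, V) channels
    Y_sr = super_resolve(yuv[..., 0], gamma)   # iterative SR on the luminance channel
    U_sr = zoom(yuv[..., 1], gamma, order=3)   # chrominance: bicubic interpolation only
    V_sr = zoom(yuv[..., 2], gamma, order=3)
    yuv_sr = np.stack([Y_sr, U_sr, V_sr], axis=-1)
    return np.clip(yuv_sr @ YUV2RGB.T, 0.0, 1.0)
```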
To verify the effectiveness of the method, 18 pictures were selected from the Set5 and Set14 standard data sets as test images (Baby, Bird, Butterfly and so on; see Table 1), and comparison tests were performed against 6 current mainstream super-resolution algorithms, with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as evaluation indices. PSNR expresses the ratio between the maximum possible power of a signal and the power of the corrupting noise, and SSIM describes the structural similarity between two images; both reflect the super-resolution quality, and higher values indicate a better result. They are computed as follows:
PSNR = 10 · log10( L² / MSE ),   MSE = (1 / (M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} ( X(i,j) - O(i,j) )²,

SSIM = ( (2 μ_x μ_y + c_1)(2 σ_xy + c_2) ) / ( (μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2) ),

where X and O are the final super-resolution image and the original high-resolution image corresponding to the low-resolution input Y, M and N are the height and width of the super-resolution image, i and j are the pixel indices along the two directions, μ_x and μ_y are the image means of the super-resolution image and the original image, σ_x and σ_y are the corresponding standard deviations, σ_xy is their covariance, c_1 = (k_1 L)² and c_2 = (k_2 L)² are constants, L is the maximum dynamic range of an image pixel (for a standard 8-bit image the range is [0, 255], i.e. L = 255), k_1 = 0.01, and k_2 = 0.03.
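For illustration, the two metrics with the constants given above (L = 255, k1 = 0.01, k2 = 0.03) can be computed as follows; the SSIM shown is the global single-window form built from whole-image means, variances and covariance:

```python
import numpy as np

def psnr(X, O, L=255.0):
    """Peak signal-to-noise ratio between super-resolved image X and reference O."""
    mse = np.mean((X.astype(np.float64) - O.astype(np.float64)) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim(X, O, L=255.0, k1=0.01, k2=0.03):
    """Global structural similarity between super-resolved image X and reference O."""
    X = X.astype(np.float64)
    O = O.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = X.mean(), O.mean()
    var_x, var_y = X.var(), O.var()
    cov_xy = ((X - mu_x) * (O - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```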
Table 1: Test results (PSNR/SSIM) of each mainstream algorithm on the 18 test pictures (table data provided as images in the original publication).
The test results show that the PSNR and SSIM values of the proposed method remain the best among the compared methods, demonstrating its effectiveness for single-frame image super-resolution. The processing pipeline is simple and fast, and the block sharpening operations can be run in parallel, which greatly improves processing efficiency and gives good real-time performance. Unlike other methods, the proposed approach does not rely on learning from large numbers of training samples; it has strong adaptability and robustness and is particularly suitable for application scenarios that lack sample data.
The references for the 6 compared mainstream super-resolution algorithms (Sun, Zeyde, Timofte, Kim, Yang, Dong) are as follows:
[1] J. Sun, et al., "Gradient profile prior and its applications in image super-resolution and enhancement," IEEE Trans. Image Process., vol. 20, no. 6, pp. 1529-1542, 2011.
[2] R. Zeyde, M. Elad, and M. Protter, "On single image scale-up using sparse-representations," in Proc. International Conference on Curves and Surfaces, 2010, pp. 711-730.
[3] R. Timofte, V. De, and L. V. Gool, "Anchored neighborhood regression for fast example-based super-resolution," in Proc. ICCV, 2014, pp. 1920-1927.
[4] K. I. Kim and Y. Kwon, "Single-image super-resolution using sparse regression and natural image prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 6, pp. 1127-1133, 2010.
[5] C. Yang and M. H. Yang, "Fast direct super-resolution by simple functions," in Proc. ICCV, 2013, pp. 561-568.
[6] W. Dong, L. Zhang, G. Shi, and X. Wu, "Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization," IEEE Trans. Image Process., vol. 20, no. 7, pp. 1838-1857, 2011.

Claims (7)

1. A single-frame image fast super-resolution method based on sigmoid transformation, characterized by comprising the following steps:
(S1) acquiring the image to be processed; if the image to be processed is a grayscale image, proceeding directly to step (S2); if the image to be processed is a color image, converting the color image into a grayscale image;
(S2) initializing: letting l denote the iteration number, where l is an integer with initial value l = 0, and letting Y denote the grayscale image from step (S1);
(S3) performing bicubic interpolation on the grayscale image Y to obtain a super-resolution image X_l;
(S4) based on the degradation model of the low-resolution image, applying Gaussian blur and downsampling to the super-resolution image X_l to obtain a low-resolution image Y_l, and performing bicubic interpolation on the difference between Y_l and the grayscale image Y to obtain correction term 1 of the super-resolution image;
(S5) sharpening the super-resolution image using sigmoid transformation to obtain a sharpened super-resolution image Z_l, and taking the difference between the sharpened image Z_l and the super-resolution image X_l as correction term 2 of the super-resolution image;
(S6) according to correction term 1 and correction term 2, updating the super-resolution image based on the super-resolution image update model to obtain an updated super-resolution image X_{l+1};
(S7) judging whether the sum of correction term 1 and correction term 2 is less than a set threshold; if so, terminating the iteration and taking the updated super-resolution image X_{l+1} as the final super-resolution image; otherwise, increasing the iteration number l by 1 and returning to step (S4);
wherein step (S4) further includes adaptively determining the position transformation parameter b_{1i} of the sigmoid transformation from correction term 1, as follows:
for correction term 1 of the super-resolution image, the position parameters corresponding to different pixels in the sigmoid transformation are estimated using
b_{1i} = β × (correction term 1)_i,
where b_{1i} is the position transformation parameter corresponding to pixel i in the sigmoid transformation, (correction term 1)_i is the value of correction term 1 at pixel i, and β is a constant.
2. The single-frame image fast super-resolution method based on sigmoid transformation as claimed in claim 1, wherein: if the image to be processed in step (S1) is a color image, it is converted into a grayscale image by converting the RGB channels of the color image into YUV channels and extracting the Y-channel image as the grayscale image, and bicubic interpolation is performed on the U and V channel images respectively to obtain the interpolated U and V channels; and step (S7) further comprises merging the final super-resolution image with the interpolated U and V channels and converting the merged result back to RGB space to obtain a super-resolution color image.
3. The method as claimed in claim 1 or 2, wherein the degradation model of the low-resolution image is:
Y_l = D H X_l,
where X_l is the super-resolution image obtained in the l-th iteration, H is the Gaussian blur operation, D is the downsampling operation, and Y_l is the degradation result obtained in the l-th iteration.
4. The single-frame image fast super-resolution method based on sigmoid transformation as claimed in claim 1 or 2, wherein step (S5) obtains the sharpened super-resolution image Z_l as follows: partially overlapping image blocks P_i are extracted from the super-resolution image; each image block P_i is sharpened using sigmoid transformation to obtain a sharpened image block P'_i; and the sharpened image blocks P'_i are accumulated, with the average taken over overlapping regions, to obtain the sharpened super-resolution image Z_l.
5. The single-frame image fast super-resolution method based on sigmoid transformation as claimed in claim 1 or 2, wherein the super-resolution image update model in step (S6) is:
X_{l+1} = X_l - η × (correction term 1 + λ × correction term 2),
where η is the learning rate and λ is the regularization coefficient.
6. The single-frame image fast super-resolution method based on sigmoid transformation as claimed in claim 5, wherein the learning rate η is 0.1 and the regularization coefficient λ is 0.1.
7. The single-frame image fast super-resolution method based on sigmoid transformation as claimed in claim 1, wherein the value of β is 0.05.
CN201810195727.8A 2018-03-09 2018-03-09 Single-frame image fast super-resolution method based on sigmoid transformation Active CN108492264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810195727.8A CN108492264B (en) 2018-03-09 2018-03-09 Single-frame image fast super-resolution method based on sigmoid transformation


Publications (2)

Publication Number Publication Date
CN108492264A CN108492264A (en) 2018-09-04
CN108492264B (en) 2020-03-31

Family

ID=63338397


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689341A * 2020-05-18 2021-11-23 BOE Technology Group Co., Ltd. Image processing method and training method of image processing model
CN116363160B * 2023-05-30 2023-08-29 Hangzhou ArteryFlow Technology Co., Ltd. CT perfusion image brain tissue segmentation method and computer equipment based on level set

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Longguang Wang, et al., "Fast single image super-resolution based on sigmoid transformation," https://www.researchgate.net/publication/319255915, 2017-08-31, pp. 1-14. *



Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Lin Zaiping; Wang Longguang; An Wei; Sheng Weidong; Li Jun; Zeng Yaoyuan

Inventor before: Lin Zaiping; Wang Longguang; An Wei; Sheng Weidong; Li Jun; Zeng Yaoyuan