CN111080516A - Super-resolution image reconstruction method based on self-sample enhancement - Google Patents

Super-resolution image reconstruction method based on self-sample enhancement

Info

Publication number
CN111080516A
CN111080516A (application CN201911170154.4A)
Authority
CN
China
Prior art keywords
resolution
image
self
design
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911170154.4A
Other languages
Chinese (zh)
Other versions
CN111080516B (en)
Inventor
曹飞龙
张磊
张清华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Petrochemical Technology
Original Assignee
Guangdong University of Petrochemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Petrochemical Technology filed Critical Guangdong University of Petrochemical Technology
Priority to CN201911170154.4A priority Critical patent/CN111080516B/en
Publication of CN111080516A publication Critical patent/CN111080516A/en
Application granted granted Critical
Publication of CN111080516B publication Critical patent/CN111080516B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/40: Analysis of texture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a super-resolution image reconstruction method based on self-sample enhancement. Two convolutional neural networks are designed on an external training set to extract the non-design features of low-resolution and high-resolution images, respectively. The mapping between the low-resolution and high-resolution non-design features is then estimated with anchored nearest-neighbor regression and least squares, connecting the two networks. A residual neural network is designed on the output of the connected networks to improve the expression of image features. A self-sample training set is constructed from the output of the residual network; exploiting the self-similarity of the image and combining the self samples with external samples, the mapping between the feature-enhanced image and the high-resolution image is again estimated with anchored nearest-neighbor regression and least squares, and the high-resolution image is reconstructed from the obtained mapping. The invention combines the advantages of deep learning and self-learning, avoiding the loss of image detail and reconstructing complex image structures.

Description

Super-resolution image reconstruction method based on self-sample enhancement
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a super-resolution image reconstruction method based on self-sample enhancement.
Background
Super-resolution image reconstruction is an important research topic in digital image processing; its main purpose is to improve image quality so as to meet the requirements of practical applications. To date, researchers have conducted extensive research on reconstruction algorithms based on interpolation, statistical models, and example learning, and have achieved numerous results. However, existing reconstruction methods still reproduce complex textures poorly, leaving room for improvement.
Early interpolation-based reconstruction algorithms are very fast to run and implement, but the reconstructed images are blurry and image details are poorly expressed. After compressed sensing theory emerged in 2006, it offered a new approach to super-resolution image reconstruction. Building on it, Yang et al. proposed a reconstruction algorithm based on sparse representation, which learns an overcomplete dictionary and represents high-resolution image information with coefficients that are as sparse as possible. Although it improves the super-resolution reconstruction effect, its computational complexity is high and image edge degradation persists. In 2014, building on these results, Timofte proposed a reconstruction algorithm based on improved anchored nearest-neighbor regression, using collaborative representation to improve the efficiency of model solving, but it does not consider the structural information of the image. With the rise of deep learning, Dong et al. proposed an image super-resolution reconstruction algorithm based on a convolutional neural network in 2014. The algorithm learns the mapping between low-resolution and high-resolution images with a three-layer convolutional network; compared with the linear mappings studied previously, the nonlinear mapping it learns achieves a markedly better reconstruction. However, training the convolutional network requires a huge number of learning samples, and without enough training samples it cannot achieve the desired effect.
Disclosure of Invention
In view of this, the invention provides a super-resolution image reconstruction method based on self-sample enhancement that reconstructs high-resolution images and is well suited to rendering complex textures.
In order to achieve the above object, the present invention provides the following technical solution. The super-resolution image reconstruction method based on self-sample enhancement comprises the following steps: first, extract the non-design features of the low-resolution and high-resolution images with independent convolutional networks; then enhance the output image of the feature extraction network with a residual network; next, establish a self-similar training set and learn the mapping between the enhanced reconstructed image, which contains self samples, and the high-resolution image; finally, estimate the reconstructed high-resolution image.
Optionally, more specifically, the method is:
a. given an external training set, determine a magnification factor u, design two independent convolutional neural networks, and extract the non-design features of the low-resolution and high-resolution images, respectively;
b. from the non-design features obtained in step a, extract p × p patches on the low-resolution and high-resolution feature maps, respectively, and write them as vectors to construct a sample set; train an overcomplete joint dictionary, search the sample set for the nearest-neighbor samples of each low-resolution dictionary atom, estimate the mapping between the low-resolution and high-resolution non-design features with anchored nearest-neighbor regression and least squares, and connect the two non-design feature extraction networks;
c. design a residual convolutional network on the output of the two convolutional networks connected in step b, and enhance the feature expression of the output images of the two connected non-design feature extraction networks to obtain enhanced reconstructed images;
d. down-sample the low-resolution image to be reconstructed by the factor u, determine a sampling factor s for the down-sampled image, and construct multi-scale self-similar images; apply the low-resolution non-design feature extraction network trained in step a and the mapping from step b to the constructed self-similar images to obtain the output images of the connected network, then apply step c to those outputs to obtain the corresponding enhanced reconstructed images, forming a self-similar sample set;
e. combining the self-similar sample set obtained in step d with the external sample set of step c, extract p × p patches on the enhanced reconstructed images and the high-resolution images, respectively; determine feature extraction operators, extract the features of the enhanced reconstructed image patches and the high-resolution image patches, and estimate the mapping, now including self-sample learning, with anchored nearest-neighbor regression and a least-squares algorithm; estimate the finally reconstructed high-resolution image from the obtained mapping.
Optionally, in step a, the output of the non-design feature extraction network is approximately equal to its input; the network comprises three convolutional layers with, respectively, 64 kernels of size 9 × 9, 32 kernels of size 1 × 1, and 1 kernel of size 5 × 5; network learning and optimization can be expressed as:
$$\min_{\Theta}\sum_{i}\bigl\|F_{o}\bigl(I^{(i)};\Theta\bigr)-I^{(i)}\bigr\|_{2}^{2},\qquad F_{l}(I)=\max\bigl(0,\,W_{l}*F_{l-1}(I)+B_{l}\bigr),\quad F_{0}(I)=I$$
where I denotes an input image, l the convolutional layer index, and o the output layer index; the parameters are updated iteratively by error back-propagation, finally yielding the 32 non-design feature maps of each training image.
Optionally, in step b, the low-resolution feature map used to collect patches is the sum of the 32 feature maps extracted by the corresponding non-design feature extraction network, the high-resolution feature maps are the 32 feature maps extracted by the corresponding network, and the patch size is p = 3u; overcomplete dictionaries are learned for the different feature maps by the K-SVD method, the number of nearest-neighbor samples for each low-resolution dictionary atom is 2048, and the mapping is given by:
$$F_{k,j}=N_{h}^{k,j}\Bigl(\bigl(N_{l}^{k,j}\bigr)^{\top}N_{l}^{k,j}+\lambda_{u}I\Bigr)^{-1}\bigl(N_{l}^{k,j}\bigr)^{\top}$$

where k denotes the dictionary index corresponding to a high-resolution non-design feature map, j the atom index within each low-resolution dictionary, $N_{l}^{k,j}$ the sample set of nearest-neighbor low-resolution non-design features, $N_{h}^{k,j}$ the sample set of the corresponding high-resolution non-design features, and $\lambda_{u}$ the regularization coefficient.
Optionally, in step c, the residual network comprises three convolutional layers with, respectively, 64 kernels of size 9 × 9, 32 kernels of size 1 × 1, and 1 kernel of size 5 × 5; network learning and optimization can be expressed as:
$$\min_{\Theta}\sum_{i}\bigl\|R_{o}\bigl(I_{z}^{(i)};\Theta\bigr)+I_{z}^{(i)}-I_{\mathrm{ori}}^{(i)}\bigr\|_{2}^{2}$$

where $I_{z}$ denotes the output image of the connected network, i.e. the input image of the residual network, l the convolutional layer index, o the output layer index, and $I_{\mathrm{ori}}$ the original high-resolution image; the parameters are updated iteratively by error back-propagation.
Optionally, in step d, the sampling coefficient is s = 0.98 and the multi-scale parameter is 20.
Optionally, in step e, the patch size is p = 3u; the feature extraction operators for the enhanced reconstructed image are first- and second-order gradient operators in the horizontal and vertical directions, which are convolved with the enhanced reconstructed image patches; the responses are written as vectors and reduced in dimension with principal component analysis, which also removes redundant information from the data; the high-resolution features are obtained by removing the corresponding enhanced reconstructed image information from the high-resolution patches; on the extracted feature samples, an overcomplete dictionary of size 1024 is trained, the 2048 nearest-neighbor samples of each enhanced reconstructed image dictionary atom are determined, and the mapping between the enhanced reconstructed image features and the high-resolution image features is estimated with anchored regression and least squares:
$$F_{j}=N_{h}^{j}\Bigl(\bigl(N_{c}^{j}\bigr)^{\top}N_{c}^{j}+\lambda_{c}I\Bigr)^{-1}\bigl(N_{c}^{j}\bigr)^{\top}$$

where j denotes the atom index of the enhanced reconstructed image dictionary, $N_{c}^{j}$ the sample set of nearest-neighbor enhanced reconstructed image features, $N_{h}^{j}$ the sample set of the corresponding high-resolution features, and $\lambda_{c}$ the regularization coefficient.
Compared with the prior art, the beneficial effects of the invention are as follows:
The non-design features of the image are extracted with a deep convolutional network, and the mapping between the low-resolution and high-resolution non-design features is learned by anchored nearest-neighbor regression. A residual network then learns the nonlinear mapping between the output of the feature extraction network and the high-resolution image, so that image features are better expressed. Finally, combined with multi-scale self-similar images, a reconstruction method that incorporates self-sample information is developed. Because the method first extracts the non-design features of the image through convolutional networks and then draws on the structural and textural information contained in the self-similar images, it reconstructs high-resolution images rich in detail, improving both the reconstruction quality and accuracy.
Drawings
FIG. 1 is a schematic flow diagram of a connection of two non-design feature extraction networks;
FIG. 2 is a schematic flow chart of a self-similar training set construction;
FIG. 3a is a low-resolution image at a magnification of 2;
FIG. 3b is the high-resolution image reconstructed by the invention at a magnification of 2;
FIG. 4a is a low-resolution image at a magnification of 3;
FIG. 4b is the high-resolution image reconstructed by the invention at a magnification of 3;
FIG. 5 is a schematic diagram of the general flow of the super-resolution image reconstruction algorithm of the present invention;
FIG. 6 is an example of an image non-design feature and a design feature.
Detailed Description
The method is based on image non-design feature learning and self-similar sample learning. Image non-design features are extracted with deep convolutional networks, and the mapping between the low-resolution and high-resolution non-design features is connected by anchored nearest-neighbor regression. A residual network then learns the nonlinear mapping between the output of the feature extraction network and the high-resolution image, enhancing the reconstruction. A multi-scale self-similar training set is then constructed and, again with anchored nearest-neighbor regression, the linear mapping between the enhanced reconstructed image, which contains self samples, and the high-resolution image is learned; finally, the high-resolution image is reconstructed.
The basic idea of the invention is as follows:
firstly, design two independent convolutional neural networks and extract the non-design features of the low-resolution and high-resolution images, respectively;
secondly, estimate the mapping between the low-resolution and high-resolution non-design features with anchored nearest-neighbor regression and least squares;
thirdly, design a residual network to enhance the output of the non-design feature extraction networks;
finally, establish a multi-scale self-similar training set and learn the mapping between the enhanced reconstructed image and the high-resolution image.
The present invention will be further illustrated with reference to the following examples.
The invention provides a super-resolution image reconstruction method based on self-sample enhancement, which comprises the following steps:
a. Given an external training set, determine a magnification factor u, design two independent convolutional neural networks, and extract the non-design features of the low-resolution and high-resolution images, respectively.
The output of the non-design feature extraction network is approximately equal to its input. The network comprises three convolutional layers with, respectively, 64 kernels of size 9 × 9, 32 kernels of size 1 × 1, and 1 kernel of size 5 × 5; network learning and optimization can be expressed as:
$$\min_{\Theta}\sum_{i}\bigl\|F_{o}\bigl(I^{(i)};\Theta\bigr)-I^{(i)}\bigr\|_{2}^{2},\qquad F_{l}(I)=\max\bigl(0,\,W_{l}*F_{l-1}(I)+B_{l}\bigr),\quad F_{0}(I)=I$$
where I denotes an input image, l the convolutional layer index, and o the output layer index; the parameters are updated iteratively by error back-propagation, finally yielding the 32 non-design feature maps of each training image.
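For concreteness, a minimal PyTorch sketch of one such feature extraction network follows. The layer shapes (64 × 9 × 9, 32 × 1 × 1, 1 × 5 × 5) follow the text; the ReLU activations, the L2 loss, the learning rate, and the stand-in training batch are assumptions, since the patent does not state them.

```python
import torch
import torch.nn as nn

class NonDesignFeatureNet(nn.Module):
    """Three-layer CNN whose output approximates its input; the 32-channel
    activation of the middle layer is taken as the non-design feature maps."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, kernel_size=9, padding=4)  # 64 kernels, 9x9
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)            # 32 kernels, 1x1
        self.conv3 = nn.Conv2d(32, 1, kernel_size=5, padding=2)  # 1 kernel, 5x5
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        f = self.act(self.conv2(self.act(self.conv1(x))))  # 32 feature maps
        return self.conv3(f), f

# Training sketch: drive the output toward the input itself, so the middle
# activations become features learned without any hand-designed operator.
net = NonDesignFeatureNet()
opt = torch.optim.SGD(net.parameters(), lr=1e-4)
img = torch.rand(8, 1, 33, 33)                     # stand-in batch of patches
for _ in range(10):                                # back-propagation iterations
    out, _ = net(img)
    loss = nn.functional.mse_loss(out, img)        # output ~ input
    opt.zero_grad(); loss.backward(); opt.step()
```

The same architecture is trained once on low-resolution and once on high-resolution images, giving the two independent networks of step a.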
b. From the non-design features obtained in step a, extract p × p patches on the low-resolution and high-resolution feature maps, respectively, and write them as vectors to construct a sample set; train an overcomplete joint dictionary, search the sample set for the nearest-neighbor samples of each low-resolution dictionary atom, estimate the mapping between the low-resolution and high-resolution non-design features with anchored nearest-neighbor regression and least squares, and connect the two non-design feature extraction networks.
The low-resolution feature map used to collect patches is the sum of the 32 feature maps extracted by the corresponding non-design feature extraction network, the high-resolution feature maps are the 32 feature maps extracted by the corresponding network, and the patch size is p = 3u; overcomplete dictionaries of size 1024 are learned for the different feature maps by the K-SVD method, the number of nearest-neighbor samples for each low-resolution dictionary atom is 2048, and the mapping is given by:
$$F_{k,j}=N_{h}^{k,j}\Bigl(\bigl(N_{l}^{k,j}\bigr)^{\top}N_{l}^{k,j}+\lambda_{u}I\Bigr)^{-1}\bigl(N_{l}^{k,j}\bigr)^{\top}$$

where k denotes the dictionary index corresponding to a high-resolution non-design feature map, j the atom index within each low-resolution dictionary, $N_{l}^{k,j}$ the sample set of nearest-neighbor low-resolution non-design features, $N_{h}^{k,j}$ the sample set of the corresponding high-resolution non-design features, and $\lambda_{u}$ the regularization coefficient.
The process of connecting the two non-design feature extraction networks is shown in FIG. 1.
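A numpy sketch of this anchored regression step is given below, under the standard anchored-neighborhood-regression reading of the formula above. The K-SVD dictionary is assumed to be trained elsewhere, and the value lam = 0.1 is an illustrative stand-in for $\lambda_{u}$.

```python
import numpy as np

def anchored_regressors(D_l, S_l, S_h, n_neighbors=2048, lam=0.1):
    """For every atom of the low-resolution dictionary D_l, gather its
    nearest-neighbor sample pairs and solve the regularized least-squares
    mapping F_j = N_h (N_l^T N_l + lam I)^{-1} N_l^T.

    D_l: (d_l, K) dictionary, atoms as unit-norm columns (e.g. from K-SVD)
    S_l: (d_l, N) low-resolution non-design feature samples, as columns
    S_h: (d_h, N) corresponding high-resolution non-design feature samples
    """
    regressors = []
    for j in range(D_l.shape[1]):
        # neighbors of the anchor atom, ranked by correlation with it
        idx = np.argsort(-(D_l[:, j] @ S_l))[:n_neighbors]
        N_l, N_h = S_l[:, idx], S_h[:, idx]
        G = N_l.T @ N_l + lam * np.eye(len(idx))
        regressors.append(N_h @ np.linalg.solve(G, N_l.T))
    return regressors

# At mapping time, a low-resolution feature vector x is assigned to its
# nearest anchor j and the high-resolution feature is regressors[j] @ x.
```

Precomputing one regressor per anchor is what makes the mapping fast at reconstruction time; only a nearest-anchor search and one matrix-vector product remain.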
c. Design a residual convolutional network on the output of the two convolutional networks connected in step b, and enhance the feature expression of the output images of the two connected non-design feature extraction networks to obtain an enhanced reconstructed image.
The residual network comprises three convolutional layers with, respectively, 64 kernels of size 9 × 9, 32 kernels of size 1 × 1, and 1 kernel of size 5 × 5; network learning and optimization can be expressed as:
$$\min_{\Theta}\sum_{i}\bigl\|R_{o}\bigl(I_{z}^{(i)};\Theta\bigr)+I_{z}^{(i)}-I_{\mathrm{ori}}^{(i)}\bigr\|_{2}^{2}$$

where $I_{z}$ denotes the output image of the connected network, i.e. the input image of the residual network, l the convolutional layer index, o the output layer index, and $I_{\mathrm{ori}}$ the original high-resolution image; the parameters are updated iteratively by error back-propagation.
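The residual enhancement can be sketched as follows, assuming the network predicts the residual between $I_{z}$ and $I_{\mathrm{ori}}$ and adds it back onto $I_{z}$, consistent with the equation above; the activation and loss choices are again assumptions.

```python
import torch
import torch.nn as nn

class ResidualEnhanceNet(nn.Module):
    """Three-layer residual CNN: learns the difference between the connected
    networks' output I_z and the original high-resolution image I_ori, then
    adds it back to I_z to form the enhanced reconstructed image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),  # 64 x 9 x 9
            nn.Conv2d(64, 32, 1),           nn.ReLU(inplace=True),  # 32 x 1 x 1
            nn.Conv2d(32, 1, 5, padding=2),                         # 1 x 5 x 5
        )

    def forward(self, i_z):
        return i_z + self.body(i_z)   # enhanced reconstructed image

# Training pairs (I_z, I_ori) come from the connected networks of FIG. 1;
# the loss || net(I_z) - I_ori ||^2 is minimized by error back-propagation.
```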
d. Down-sample the low-resolution image to be reconstructed by the factor u, determine a sampling factor s for the down-sampled image, and construct multi-scale self-similar images; apply the low-resolution non-design feature extraction network trained in step a and the mapping from step b to the constructed self-similar images to obtain the output images of the connected network, then apply step c to those outputs to obtain the corresponding enhanced reconstructed images, forming a self-similar sample set.
FIG. 2 is a schematic diagram of the process of constructing the self-similar training set; the sampling coefficient is s = 0.98 and the multi-scale parameter is 20.
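A sketch of the multi-scale construction with s = 0.98 and 20 scales follows; bicubic resampling and the geometric schedule $s^{k}$ are assumptions, as the patent fixes only the coefficient and the number of scales.

```python
import numpy as np
from PIL import Image

def self_similar_pyramid(lr_image, s=0.98, n_scales=20):
    """Build the multi-scale self-similar image set by repeatedly shrinking
    the low-resolution input (a uint8 grayscale array) with the sampling
    coefficient s, producing n_scales progressively smaller variants that
    supply the self samples of step d."""
    img = Image.fromarray(lr_image)
    w, h = img.size
    pyramid = []
    for k in range(1, n_scales + 1):
        f = s ** k
        pyramid.append(np.asarray(
            img.resize((max(1, round(w * f)), max(1, round(h * f))),
                       Image.BICUBIC)))
    return pyramid
```

Because s is close to 1, the 20 scales stay near the original resolution, which is where natural images exhibit the strongest patch self-similarity.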
e. Combining the self-similar sample set obtained in step d with the external sample set of step c, extract p × p patches on the enhanced reconstructed images and the high-resolution images, respectively; determine feature extraction operators, extract the features of the enhanced reconstructed image patches and the high-resolution image patches, and estimate the mapping, now including self-sample learning, with anchored nearest-neighbor regression and a least-squares algorithm; estimate the finally reconstructed high-resolution image from the obtained mapping.
Here the patch size is p = 3u; the feature extraction operators for the enhanced reconstructed image are first- and second-order gradient operators in the horizontal and vertical directions, which are convolved with the enhanced reconstructed image patches; the responses are written as vectors and reduced in dimension with principal component analysis (PCA), which also removes redundant information from the data; the high-resolution features are obtained by removing the corresponding enhanced reconstructed image information from the high-resolution patches; on the extracted feature samples, an overcomplete dictionary of size 1024 is trained, the 2048 nearest-neighbor samples of each enhanced reconstructed image dictionary atom are determined, and the mapping between the enhanced reconstructed image features and the high-resolution image features is estimated with anchored regression and least squares:
$$F_{j}=N_{h}^{j}\Bigl(\bigl(N_{c}^{j}\bigr)^{\top}N_{c}^{j}+\lambda_{c}I\Bigr)^{-1}\bigl(N_{c}^{j}\bigr)^{\top}$$

where j denotes the atom index of the enhanced reconstructed image dictionary, $N_{c}^{j}$ the sample set of nearest-neighbor enhanced reconstructed image features, $N_{h}^{j}$ the sample set of the corresponding high-resolution features, and $\lambda_{c}$ the regularization coefficient.
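The feature extraction of step e can be sketched as follows; the concrete gradient filter taps, the PCA dimension, and the non-overlapping patch stride are illustrative assumptions, since the patent names the operators but not their coefficients.

```python
import numpy as np
from scipy.ndimage import correlate
from sklearn.decomposition import PCA

# First- and second-order gradient operators, horizontal and vertical; these
# filter taps follow common anchored-regression practice and are assumptions.
F1 = np.array([[-1, 0, 1]], dtype=float)
F2 = np.array([[1, 0, -2, 0, 1]], dtype=float)
OPERATORS = [F1, F1.T, F2, F2.T]

def enhanced_patch_features(image, p, pca_dim=30):
    """Convolve the enhanced reconstructed image with the four gradient
    operators, stack each p x p patch of the responses into one vector, and
    reduce dimension with PCA (pca_dim is illustrative)."""
    responses = [correlate(image.astype(float), f, mode='nearest')
                 for f in OPERATORS]
    h, w = image.shape
    vecs = [np.concatenate([r[y:y + p, x:x + p].ravel() for r in responses])
            for y in range(0, h - p + 1, p)
            for x in range(0, w - p + 1, p)]
    return PCA(n_components=pca_dim).fit_transform(np.asarray(vecs))

# The matching high-resolution features are the high-resolution patches minus
# the corresponding enhanced reconstructed patches, per the text above.
```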
FIG. 3a shows a low-resolution image at a magnification of 2, and FIG. 3b the high-resolution image reconstructed by the invention; FIG. 4a shows a low-resolution image at a magnification of 3, and FIG. 4b the high-resolution image reconstructed by the invention.
Although the embodiments have been described and illustrated separately, it will be apparent to those skilled in the art that common techniques may be substituted and combined between the embodiments; for matters not explicitly described in one embodiment, reference may be made to another embodiment in which they are described.
The above-described embodiments do not limit the scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the above-described embodiments shall be included in the protection scope of this technical solution.

Claims (7)

1. A super-resolution image reconstruction method based on self-sample enhancement, characterized by comprising the following steps:
first, extract the non-design features of the low-resolution and high-resolution images with independent convolutional networks; then enhance the output image of the feature extraction network with a residual network; next, establish a self-similar training set and learn the mapping between the enhanced reconstructed image, which contains self samples, and the high-resolution image; finally, estimate the reconstructed high-resolution image.
2. The super-resolution image reconstruction method based on self-sample enhancement according to claim 1, characterized in that, more specifically, the method comprises the following steps:
a. given an external training set, determine a magnification factor u, design two independent convolutional neural networks, and extract the non-design features of the low-resolution and high-resolution images, respectively;
b. from the non-design features obtained in step a, extract p × p patches on the low-resolution and high-resolution feature maps, respectively, and write them as vectors to construct a sample set; train an overcomplete joint dictionary, search the sample set for the nearest-neighbor samples of each low-resolution dictionary atom, estimate the mapping between the low-resolution and high-resolution non-design features with anchored nearest-neighbor regression and least squares, and connect the two non-design feature extraction networks;
c. design a residual convolutional network on the output of the two convolutional networks connected in step b, and enhance the feature expression of the output images of the two connected non-design feature extraction networks to obtain enhanced reconstructed images;
d. down-sample the low-resolution image to be reconstructed by the factor u, determine a sampling factor s for the down-sampled image, and construct multi-scale self-similar images; apply the low-resolution non-design feature extraction network trained in step a and the mapping from step b to the constructed self-similar images to obtain the output images of the connected network, then apply step c to those outputs to obtain the corresponding enhanced reconstructed images, forming a self-similar sample set;
e. combining the self-similar sample set obtained in step d with the external sample set of step c, extract p × p patches on the enhanced reconstructed images and the high-resolution images, respectively; determine feature extraction operators, extract the features of the enhanced reconstructed image patches and the high-resolution image patches, and estimate the mapping, now including self-sample learning, with anchored nearest-neighbor regression and a least-squares algorithm; estimate the finally reconstructed high-resolution image from the obtained mapping.
3. The super-resolution image reconstruction method based on self-sample enhancement according to claim 2, characterized in that: in step a, the output of the non-design feature extraction network is approximately equal to its input; the network comprises three convolutional layers with, respectively, 64 kernels of size 9 × 9, 32 kernels of size 1 × 1, and 1 kernel of size 5 × 5; network learning and optimization can be expressed as:

$$\min_{\Theta}\sum_{i}\bigl\|F_{o}\bigl(I^{(i)};\Theta\bigr)-I^{(i)}\bigr\|_{2}^{2},\qquad F_{l}(I)=\max\bigl(0,\,W_{l}*F_{l-1}(I)+B_{l}\bigr),\quad F_{0}(I)=I$$

where I denotes an input image, l the convolutional layer index, and o the output layer index; the parameters are updated iteratively by error back-propagation, finally yielding the 32 non-design feature maps of each training image.
4. The super-resolution image reconstruction method based on self-sample enhancement according to claim 2, characterized in that: in step b, the low-resolution feature map used to collect patches is the sum of the 32 feature maps extracted by the corresponding non-design feature extraction network, the high-resolution feature maps are the 32 feature maps extracted by the corresponding network, and the patch size is p = 3u; overcomplete dictionaries are learned for the different feature maps by the K-SVD method, the number of nearest-neighbor samples for each low-resolution dictionary atom is 2048, and the mapping is given by:

$$F_{k,j}=N_{h}^{k,j}\Bigl(\bigl(N_{l}^{k,j}\bigr)^{\top}N_{l}^{k,j}+\lambda_{u}I\Bigr)^{-1}\bigl(N_{l}^{k,j}\bigr)^{\top}$$

where k denotes the dictionary index corresponding to a high-resolution non-design feature map, j the atom index within each low-resolution dictionary, $N_{l}^{k,j}$ the sample set of nearest-neighbor low-resolution non-design features, $N_{h}^{k,j}$ the sample set of the corresponding high-resolution non-design features, and $\lambda_{u}$ the regularization coefficient.
5. The super-resolution image reconstruction method based on self-sample enhancement according to any one of claims 2 to 4, characterized in that: in step c, the residual network comprises three convolutional layers with, respectively, 64 kernels of size 9 × 9, 32 kernels of size 1 × 1, and 1 kernel of size 5 × 5; network learning and optimization can be expressed as:

$$\min_{\Theta}\sum_{i}\bigl\|R_{o}\bigl(I_{z}^{(i)};\Theta\bigr)+I_{z}^{(i)}-I_{\mathrm{ori}}^{(i)}\bigr\|_{2}^{2}$$

where $I_{z}$ denotes the output image of the connected network, i.e. the input image of the residual network, l the convolutional layer index, o the output layer index, and $I_{\mathrm{ori}}$ the original high-resolution image; the parameters are updated iteratively by error back-propagation.
6. The super-resolution image reconstruction method according to claim 5, characterized in that: in step d, the sampling coefficient is s = 0.98 and the multi-scale parameter is 20.
7. The super-resolution image reconstruction method according to claim 5, characterized in that: in step e, the patch size is p = 3u; the feature extraction operators for the enhanced reconstructed image are first- and second-order gradient operators in the horizontal and vertical directions, which are convolved with the enhanced reconstructed image patches; the responses are written as vectors and reduced in dimension with principal component analysis, which also removes redundant information from the data; the high-resolution features are obtained by removing the corresponding enhanced reconstructed image information from the high-resolution patches; on the extracted feature samples, an overcomplete dictionary of size 1024 is trained, the 2048 nearest-neighbor samples of each enhanced reconstructed image dictionary atom are determined, and the mapping between the enhanced reconstructed image features and the high-resolution image features is estimated with anchored regression and least squares:

$$F_{j}=N_{h}^{j}\Bigl(\bigl(N_{c}^{j}\bigr)^{\top}N_{c}^{j}+\lambda_{c}I\Bigr)^{-1}\bigl(N_{c}^{j}\bigr)^{\top}$$

where j denotes the atom index of the enhanced reconstructed image dictionary, $N_{c}^{j}$ the sample set of nearest-neighbor enhanced reconstructed image features, $N_{h}^{j}$ the sample set of the corresponding high-resolution features, and $\lambda_{c}$ the regularization coefficient.
CN201911170154.4A 2019-11-26 2019-11-26 Super-resolution image reconstruction method based on self-sample enhancement Active CN111080516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911170154.4A CN111080516B (en) 2019-11-26 2019-11-26 Super-resolution image reconstruction method based on self-sample enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911170154.4A CN111080516B (en) 2019-11-26 2019-11-26 Super-resolution image reconstruction method based on self-sample enhancement

Publications (2)

Publication Number Publication Date
CN111080516A true CN111080516A (en) 2020-04-28
CN111080516B CN111080516B (en) 2023-04-28

Family

ID=70311665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911170154.4A Active CN111080516B (en) 2019-11-26 2019-11-26 Super-resolution image reconstruction method based on self-sample enhancement

Country Status (1)

Country Link
CN (1) CN111080516B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628108A (en) * 2021-07-05 2021-11-09 上海交通大学 Image super-resolution method and system based on discrete representation learning and terminal
CN116228786A (en) * 2023-05-10 2023-06-06 青岛市中心医院 Prostate MRI image enhancement segmentation method, device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170293825A1 (en) * 2016-04-08 2017-10-12 Wuhan University Method and system for reconstructing super-resolution image
WO2018120329A1 (en) * 2016-12-28 2018-07-05 深圳市华星光电技术有限公司 Single-frame super-resolution reconstruction method and device based on sparse domain reconstruction
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning
CN109741256A (en) * 2018-12-13 2019-05-10 西安电子科技大学 Image super-resolution rebuilding method based on rarefaction representation and deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170293825A1 (en) * 2016-04-08 2017-10-12 Wuhan University Method and system for reconstructing super-resolution image
WO2018120329A1 (en) * 2016-12-28 2018-07-05 深圳市华星光电技术有限公司 Single-frame super-resolution reconstruction method and device based on sparse domain reconstruction
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning
CN109741256A (en) * 2018-12-13 2019-05-10 西安电子科技大学 Image super-resolution rebuilding method based on rarefaction representation and deep learning

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628108A (en) * 2021-07-05 2021-11-09 上海交通大学 Image super-resolution method and system based on discrete representation learning and terminal
CN113628108B (en) * 2021-07-05 2023-10-27 上海交通大学 Image super-resolution method and system based on discrete representation learning and terminal
CN116228786A (en) * 2023-05-10 2023-06-06 青岛市中心医院 Prostate MRI image enhancement segmentation method, device, electronic equipment and storage medium
CN116228786B (en) * 2023-05-10 2023-08-08 青岛市中心医院 Prostate MRI image enhancement segmentation method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111080516B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN109087258B (en) Deep learning-based image rain removing method and device
CN111462013B (en) Single-image rain removing method based on structured residual learning
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN112381097A (en) Scene semantic segmentation method based on deep learning
CN112991472B (en) Image compressed sensing reconstruction method based on residual error dense threshold network
CN111127354B (en) Single-image rain removing method based on multi-scale dictionary learning
CN111222519B (en) Construction method, method and device of hierarchical colored drawing manuscript line extraction model
Luo et al. Lattice network for lightweight image restoration
CN108460749B (en) Rapid fusion method of hyperspectral and multispectral images
CN111861884A (en) Satellite cloud image super-resolution reconstruction method based on deep learning
CN111080516B (en) Super-resolution image reconstruction method based on self-sample enhancement
CN112884758B (en) Defect insulator sample generation method and system based on style migration method
CN111402138A (en) Image super-resolution reconstruction method of supervised convolutional neural network based on multi-scale feature extraction fusion
CN106447609A (en) Image super-resolution method based on depth convolutional neural network
CN110288529B (en) Single image super-resolution reconstruction method based on recursive local synthesis network
CN113222812A (en) Image reconstruction method based on information flow reinforced deep expansion network
CN112085655A (en) Face super-resolution method based on dense residual attention face prior network
CN109272450B (en) Image super-resolution method based on convolutional neural network
CN111199237A (en) Attention-based convolutional neural network frequency division feature extraction method
CN108764287B (en) Target detection method and system based on deep learning and packet convolution
CN116029905A (en) Face super-resolution reconstruction method and system based on progressive difference complementation
CN115660979A (en) Attention mechanism-based double-discriminator image restoration method
CN115456900A (en) Improved transform-based Qinhong tomb warrior fragment denoising method
CN115272673A (en) Point cloud semantic segmentation method based on three-dimensional target context representation
CN114863094A (en) Industrial image region-of-interest segmentation algorithm based on double-branch network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant