CN115063298A - PET super-resolution method of multiple super-resolution residual error network - Google Patents

PET super-resolution method of multiple super-resolution residual error network

Info

Publication number
CN115063298A
CN115063298A
Authority
CN
China
Prior art keywords
resolution
super
residual
image
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210988095.7A
Other languages
Chinese (zh)
Inventor
张弓
李学俊
王华彬
苏进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Canada Institute Of Health Engineering Hefei Co ltd
Original Assignee
China Canada Institute Of Health Engineering Hefei Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Canada Institute Of Health Engineering Hefei Co ltd filed Critical China Canada Institute Of Health Engineering Hefei Co ltd
Priority to CN202210988095.7A priority Critical patent/CN115063298A/en
Publication of CN115063298A publication Critical patent/CN115063298A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a PET super-resolution method based on a multiple super-resolution residual network (MSRRN), belonging to the field of medical image processing. The method comprises the following steps: step one, acquiring a training set and a test set; step two, designing multi-level skip-connected residual blocks and constructing multiple super-resolution residual block modules; step three, training the deep residual network by stochastic gradient descent to obtain the super-resolution reconstruction model of the network; step four, inputting a low-resolution image into the reconstruction model to obtain predicted residual feature values; step five, combining the residual image and the low-resolution image into a high-resolution image; and step six, evaluating the network with image quality evaluation indices. To address the difficulty existing fast super-resolution networks have in extracting deep image information, the invention proposes the multiple super-resolution residual network. Most quantitative and qualitative evaluation indices show that the model reconstructs image details and textures better, indicating that the algorithm outperforms the most advanced methods in the prior art.

Description

PET super-resolution method of multiple super-resolution residual error network
Technical Field
The invention belongs to the field of medical image processing, and particularly relates to a PET super-resolution method of a multi-super-resolution residual error network.
Background
In recent years, the application of deep learning to image processing has advanced remarkably. In many tasks, features learned by deep networks have proven more expressive than features constructed by traditional methods. Dong et al. proposed the super-resolution convolutional neural network (SRCNN) and applied it to image super-resolution reconstruction. The network has a simple structure and a good reconstruction effect, but its large convolution kernels and traditional bicubic-interpolation upsampling greatly limit its running speed. Subsequently, Dong et al. proposed the fast super-resolution convolutional neural network (FSRCNN) based on SRCNN. FSRCNN is deeper than SRCNN and replaces bicubic interpolation with deconvolution; it is markedly faster than SRCNN and improves the image super-resolution effect. However, FSRCNN has few convolutional layers and the feature information of adjacent convolutional layers lacks correlation, so it is difficult to extract deep image information, and the super-resolution reconstruction effect suffers. To solve this problem, we propose a multiple super-resolution residual network (MSRRN) for medical image super-resolution.
A search found Chinese patent publication No. CN109978763A, entitled "Image super-resolution reconstruction algorithm based on a skip-connection residual network", filed 2019.03.01. That method comprises the steps of: selecting a training data set and performing bicubic interpolation on the low-resolution images; constructing the specific network structure and formulating a network training strategy; extracting details of the interpolated images; reducing the dimension of the aggregate features and widening the single-pixel receptive field; iterating the network training until the maximum number of iterations is reached; and completing the final high-resolution image reconstruction by global residual learning. However, that application currently addresses only PET images, no comparison against natural-image algorithms was performed, and its reconstructed image still differs from the original PET image.
Disclosure of Invention
1. Technical problem to be solved by the invention
In view of the problems that, in the existing PET imaging process, the generated image is often unclear, of low resolution and blurred at the edges due to the limitations of hardware equipment and nuclide dose, the invention provides a PET super-resolution method based on a multiple super-resolution residual network, which can clearly identify the PET image and improve the image resolution.
2. Technical scheme
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
The invention relates to a PET super-resolution method of a multiple super-resolution residual network, which comprises the following steps:
step one, acquiring high-resolution and low-resolution PET images as a training set, and taking part of the low-resolution images as a test set;
step two, designing multi-level skip-connected residual blocks in a fast super-resolution convolutional neural network, and constructing a multiple super-resolution residual block module;
step three, training a deep residual network by a stochastic gradient descent method to obtain the super-resolution reconstruction model of the network;
step four, inputting the low-resolution image into the super-resolution reconstruction model to obtain predicted residual feature values;
step five, combining the residual image and the low-resolution image into a high-resolution image;
and step six, evaluating the network by using the image quality evaluation index.
3. Advantageous effects
Compared with the prior art, the technical scheme provided by the invention has the following remarkable effects:
(1) according to the PET super-resolution method of the multiple super-resolution residual network, the improved residual blocks are connected through multi-level skip connections to obtain the predicted residual image, and the residual image and the low-resolution image are then combined into a high-resolution image, so that image information is better extracted and a clearer high-resolution image is obtained.
(2) According to the PET super-resolution method of the multiple super-resolution residual network, the adjacent residual information of the convolutional-layer feature vectors inside the residual block can be fully utilized, so that the residual block extracts more feature information, obtains more picture features, and makes the high-resolution image clearer.
(3) According to the PET super-resolution method of the multiple super-resolution residual network, using the same residual function as the activation function reduces the parameters in the network, lowers the computational complexity, and speeds up network training.
Drawings
FIG. 1(a) is a block diagram of the MSRRB structure of the present invention;
FIG. 1(b) is a diagram of two adjacent multi-level skip-connected residual blocks;
fig. 2 is a diagram of a MSRRN network architecture of the present application;
fig. 3(a) is a PET raw map based on VDSR reconstruction;
FIG. 3(b) is the reconstruction of FIG. 3(a) by the Bicubic method;
FIG. 3(c) is the reconstruction of FIG. 3(a) by the SRCNN method;
FIG. 3(d) is the reconstruction of FIG. 3(a) by the MSRRN method;
FIG. 4 is a block flow diagram of the steps of the present invention.
Detailed Description
For a further understanding of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Examples
With reference to fig. 1(a), fig. 1(b), fig. 2, fig. 3(a), fig. 3(b), fig. 3(c), fig. 3(d), and fig. 4, the method for super-resolution PET of multiple super-resolution residual networks according to the present embodiment includes the following steps:
step one, acquiring high-resolution and low-resolution PET images as a training set and a test set:
to fully utilize the data set, data expansion of the BSD500 and T91 training set images using MATLAB was performed. Both scaling and rotation methods are used to add data. Each image is scaled by 0.7, 0.8 and 0.9, a picture is obtained after scaling by 0.7, a picture is obtained after scaling by 0.8 and a picture is obtained after scaling by 0.9. In addition, each image is rotated by 90 °, 180 ° and 270 °, respectively, to obtain one picture. The original images of the BSD500 and T91 are first Gaussian filtered 1.5 4 And (5) sub-sampling and bicubic interpolation to obtain an LR image. The LR training image is then segmented into a set of 96 x 96 HR image blocks of step size 12, resulting in 9456 images. For parameter initialization in the network, which are randomly generated from a gaussian distribution with a mean of zero and a standard deviation of 0.001, the obtained high resolution image blocks are first blurred as much as possible using a 7 x 7 gaussian kernel with a standard deviation of 1.6 to simulate a naturally blurred image.
Designing a multi-stage jumper connection residual block in a fast super-resolution convolutional neural network (FSRCNN) to construct a multi-super-resolution residual block (MSRRB) module:
as shown in fig. 2, one convolutional layer connected with a λ skip forms a sub-residual block, two sub-residual blocks connected with a β skip form a super-resolution residual block (SRRB), and each multi-level skip-connected residual block (MSRRB) is formed by three SRRBs; the construction process is embodied in the code, and the fast super-resolution convolutional neural network (FSRCNN) backbone is built layer by layer according to the diagram, without the skip connections λ and β.
The MSRRN consists of 8 MSRRBs and an upsampling block. Each MSRRB extracts LR image features using its three super-resolution residual blocks (SRRB); each sub-residual block consists of a convolutional layer and a λ skip connection, and the upsampling module consists of a sub-pixel convolutional layer.
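The sub-pixel convolutional layer of the upsampling module rearranges r² feature channels into an r×-larger spatial grid (the "pixel shuffle" used by ESPCN-style upsamplers). A minimal pure-Python sketch of that rearrangement, assuming channel-first nested lists:

```python
def pixel_shuffle(x, r):
    """Rearrange a (r*r, H, W) feature map into an (H*r, W*r) image.

    Channel c contributes the sub-pixel at offset (c // r, c % r),
    as in a sub-pixel convolution (pixel shuffle) layer.
    """
    channels = len(x)
    assert channels == r * r
    h, w = len(x[0]), len(x[0][0])
    out = [[0.0] * (w * r) for _ in range(h * r)]
    for c in range(channels):
        dy, dx = c // r, c % r
        for i in range(h):
            for j in range(w):
                out[i * r + dy][j * r + dx] = x[c][i][j]
    return out

# Four 2x2 channels -> one 4x4 output (x2 upscaling):
feat = [[[c] * 2 for _ in range(2)] for c in range(4)]
up = pixel_shuffle(feat, 2)
```

Each output 2 × 2 cell interleaves one value from every channel, so the learned convolution that precedes the shuffle effectively predicts the r² sub-pixel phases.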
The residual block function is defined as:
y = F(x, {W₁}) + x  (1)
where y is the output vector of the residual block, x is the input vector of the residual block, Wᵢ denotes the weight of the filter at layer i, and F(x) denotes the residual mapping, applied before the activation function of the second layer.
The function F(x, {Wᵢ}) is the residual map to be learned, in which the input vector x and the residual output F(x) must have equal size. Suppose the convolutional layers of a deep neural network are to fit an identity mapping function, denoted H(x); the training of the deep neural network then reduces to learning that identity mapping. Fitting H(x) = x directly with convolutional layers is difficult, but if the deep neural network is designed as H(x) = F(x) + x, training can be converted to learning the residual function F(x) = H(x) − x; when F(x) = 0, this corresponds exactly to the identity mapping H(x) = x. Under the same computational conditions, the convolutional layers fit the residual function F(x) = H(x) − x more easily than the identity function H(x) = x.
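The argument above — that with H(x) = F(x) + x the network only has to drive F toward zero to realize an identity mapping — can be checked with a toy residual branch; the scalar "layers" below are illustrative stand-ins for the convolutions:

```python
def residual_block(x, w1, w2):
    """y = F(x, {W_i}) + x with F = w2 * relu(w1 * x), a toy stand-in."""
    relu = lambda v: max(0.0, v)
    f = w2 * relu(w1 * x)   # residual branch F(x)
    return f + x            # skip connection adds the input back

# With zero residual weights, F(x) = 0 and the block reduces to the
# identity mapping H(x) = x -- nothing has to be "learned" for identity:
assert residual_block(3.5, 0.0, 0.0) == 3.5

# With nonzero weights the block models only the deviation from identity:
y = residual_block(2.0, 1.0, 0.25)   # F(2.0) = 0.25 * 2.0 = 0.5
assert y == 2.5
```

This is why the residual formulation eases optimization: near-identity layers correspond to near-zero weights, which gradient descent reaches easily.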
One convolution layer is connected with a lambda jumper to form a sub-residual block, and two sub-residual blocks are connected with a beta jumper to form a super-resolution residual block (SRRB).
The multi-level skip-connection module operates as follows: let x be the input of the multi-level skip-connected residual block, y₁ the output of the first sub-residual block, y₂ the output of the second sub-residual block, and y₃ the output of the multiple super-resolution residual module; then the outputs are:
y₁ = W₁(x) + λx  (2)
y₂ = W₂(y₁) + λy₁ = W₂(W₁(x) + λx) + λW₁(x) + λ²x  (3)
y₃ = y₂ + β₁x = W₂(W₁(x) + λx) + λW₁(x) + (λ² + β₁)x  (4)
From the output y₃ it can be seen that when the input x is connected in the multi-level skip manner, the block obtains not only the composite feature W₂(W₁(x) + λx) of the first two sub-residual blocks but also the scaled intermediate output λy₁. Therefore, the correlated information of the convolutional-layer feature vectors inside the residual block can be extracted through the multi-level skip-connected residual block. The output y₄ of the third sub-residual block is:
y₄ = W₃(y₃) + λy₃ = W₃(y₂ + β₁x) + λW₂(y₁) + λ²W₁(x) + (λ³ + λβ₁)x  (5)
The output y₅ of the fourth sub-residual block is:
y₅ = W₄(y₄) + λy₄ = W₄(y₄) + λW₃(y₂ + β₁x) + λ²W₂(y₁) + λ³W₁(x) + (λ⁴ + λ²β₁)x  (6)
Then the output y₆ of the second multi-level skip connection of the residual blocks is:
y₆ = y₅ + β₂y₃ = W₄(y₄) + λW₃(y₂ + β₁x) + (λ² + β₂)W₂(y₁) + (λ³ + λβ₂)W₁(x) + (λ² + β₁)(λ² + β₂)x  (7)
where W₁–W₄ denote the filter weights of the first to fourth layers, λ is the skip-connection parameter of the sub-residual blocks (its powers λ², λ³, λ⁴ arise from repeated application), and β₁ and β₂ are the skip-connection parameters of the first and second super-resolution residual blocks.
From the output y₆ it can be seen that the output of each convolutional layer in two adjacent multi-level skip-connected residual blocks convolves the feature vector of the preceding convolutional layer, so the adjacent residual information of the convolutional-layer feature vectors inside the residual block is fully utilized; therefore, the residual block can extract more feature information.
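Treating each Wᵢ as a scalar linear map (an illustrative stand-in for the convolutions), the recursive skip-connection definitions above can be verified numerically against their closed-form expansions. The second-level skip is taken here as β₂·y₃ — the reading that makes the expanded coefficients consistent — and all numeric values are arbitrary test inputs:

```python
import math

# Arbitrary test values; W_i(x) is modeled as scalar multiplication w_i * x.
x, lam, b1, b2 = 1.7, 0.1, 0.1, 0.2
w1, w2, w3, w4 = 0.5, -0.3, 0.25, 0.4

# Recursive definitions of the multi-level skip connections:
y1 = w1 * x + lam * x      # (2) first sub-residual block
y2 = w2 * y1 + lam * y1    # (3) second sub-residual block
y3 = y2 + b1 * x           # (4) first beta skip
y4 = w3 * y3 + lam * y3    # (5) third sub-residual block
y5 = w4 * y4 + lam * y4    # (6) fourth sub-residual block
y6 = y5 + b2 * y3          # (7) second beta skip

# Closed-form expansions, expanded term by term:
c3 = w2 * (w1 * x + lam * x) + lam * w1 * x + (lam**2 + b1) * x
c6 = (w4 * y4 + lam * w3 * y3 + (lam**2 + b2) * w2 * y1
      + (lam**3 + lam * b2) * w1 * x
      + (lam**2 + b1) * (lam**2 + b2) * x)

assert math.isclose(y3, c3)
assert math.isclose(y6, c6)
```

The check makes the structure visible: every intermediate output survives in y₆ scaled by a distinct polynomial in λ, β₁, β₂, which is how the multi-level skips propagate adjacent-layer feature information.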
Step three, training the multi-level skip-connected deep residual network with a stochastic gradient descent (SGD) method to obtain the super-resolution reconstruction model of the network:
The method uses the mean square error (MSE) as the loss function to estimate the network parameter θ; the MSE function is:
L(θ) = (1/n) Σ_{i=1}^{n} ‖G(Yᵢ; θ) − Xᵢ‖²  (8)
where the parameter θ is estimated by minimizing the loss between the reconstructed image and the corresponding true HR image, G(Yᵢ; θ) is the reconstructed image, Xᵢ is an HR image sample, Yᵢ is the corresponding LR image sample, and n is the number of training samples. The SGD algorithm is adopted to optimize the training of the MSRRN. The update of the parameter θ by the SGD algorithm is:
g_t^l = ∂L(θ)/∂W_t^l  (9)
Δ_{t+1} = μΔ_t − η·g_t^l  (10)
W_{t+1}^l = W_t^l + Δ_{t+1}  (11)
where t is the iteration number, l is the index of the convolutional layer, μ ∈ [0, 1] is the momentum weight of the previous iteration, and η is the learning rate. The commonly used activation function is the ReLU function, but it introduces excessive parameters, and in the multi-level skip-connected deep residual network these parameters increase the complexity of the network computation and slow down training; therefore, in this embodiment the same residual function is adopted in place of ReLU as the activation function:
F(x) = max(0, Wᵢ·x + Bᵢ)
where i denotes the layer index, Wᵢ is the weight of a 64 × 3 × 3 × 64 filter at layer i, and Bᵢ is a 64-dimensional bias. The filter size of the network input convolutional layer is 1 × 3 × 3 × 64, and the filter size of the network output convolutional layer is 64 × 3 × 3 × 1. To keep the feature maps after convolution the same size as those of the input convolutional layer, the stride of all convolutional layers of the network is set to 1 and the padding is set to 1. The skip-connection parameters of the network residual blocks are λ = 0.1 and β₁ = 0.1.
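The training loop of step three — an MSE loss minimized by SGD with momentum, where μ weights the previous iteration and η is the learning rate — can be sketched on a toy one-parameter model. The data and hyper-parameter values below are illustrative, not the patent's:

```python
# Toy model G(y; theta) = theta * y fitted to HR targets x = 2 * y,
# minimizing the MSE loss L(theta) = (1/n) * sum((theta*y_i - x_i)^2)
# with momentum SGD: delta <- mu*delta - eta*grad; theta <- theta + delta.
ys = [0.5, 1.0, 1.5, 2.0]        # "LR" samples Y_i
xs = [2 * y for y in ys]         # "HR" samples X_i

def mse_loss(theta):
    n = len(ys)
    return sum((theta * y - x) ** 2 for y, x in zip(ys, xs)) / n

def grad(theta):
    n = len(ys)
    return sum(2 * (theta * y - x) * y for y, x in zip(ys, xs)) / n

theta, delta, mu, eta = 0.0, 0.0, 0.9, 0.05
start = mse_loss(theta)
for _ in range(400):
    delta = mu * delta - eta * grad(theta)   # momentum accumulation
    theta = theta + delta                    # parameter update
assert mse_loss(theta) < start               # the loss decreases
```

The same two-line update, applied per filter weight, is what equations (9)–(11) describe for the full network.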
Step four: input the low-resolution image into the MSRRN super-resolution reconstruction model and obtain the predicted residual feature values from the residual blocks. As shown in fig. 2, this step is a black-box process embodied in the code: "+" denotes combination, the concat() function in Python is used for the residual concatenation combination, and the high-resolution image is finally obtained through upsampling and convolution.
Step five: combining the residual image and the low-resolution image into a high-resolution image;
step six: and evaluating the network by using the image quality evaluation index.
The results of the experiment are shown in table 1:
TABLE 1 image quality evaluation index value
[Table 1: average PSNR and SSIM values; reproduced as an image in the original publication]
As shown in table 1, the average PSNR and SSIM values obtained by the MSRRN algorithm are greatly improved at a magnification factor of 2.
The present invention and its embodiments have been described above schematically, and the description is not limiting; what is shown in the drawings is only one embodiment of the present invention, and the actual structure is not limited thereto. Therefore, if a person skilled in the art, enlightened by the invention, designs similar structures and embodiments without departing from the spirit of the invention and without inventive effort, they shall fall within the protection scope of the invention.

Claims (9)

1. A PET super-resolution method of a multiple super-resolution residual network, characterized in that the method comprises the following steps:
step one, acquiring high-resolution and low-resolution PET images as a training set, and taking part of the low-resolution images as a test set;
step two, designing multi-level skip-connected residual blocks in a fast super-resolution convolutional neural network, and constructing a multiple super-resolution residual block module;
step three, training a deep residual network by a stochastic gradient descent method to obtain the super-resolution reconstruction model of the network;
step four, inputting the low-resolution image into the super-resolution reconstruction model to obtain predicted residual feature values;
step five, combining the residual image and the low-resolution image into a high-resolution image;
and step six, evaluating the network by using the image quality evaluation index.
2. The PET super-resolution method of a multiple super-resolution residual network according to claim 1, wherein in the first step MATLAB is used to expand the BSD500 and T91 training-set images, using the two methods of scaling and rotation to increase the data; the data set is Gaussian filtered (standard deviation 1.5), 4× down-sampled and bicubic interpolated, and the obtained high-resolution image blocks are blurred with a Gaussian kernel to obtain the low-resolution images.
3. The PET super-resolution method of a multiple super-resolution residual network according to claim 1 or 2, wherein in the second step the multiple super-resolution residual block is composed of 3 sub-residual blocks and β skip connections, each sub-residual block being composed of a convolutional layer and a λ skip connection; the super-resolution residual blocks and the upsampling module together form the multiple super-resolution residual block; the upsampling module consists of a sub-pixel convolutional layer;
the function of the residual block is
y = F(x, {Wᵢ}) + x  (1)
where y is the output vector of the residual block, x is the input vector of the residual block, Wᵢ denotes the weight of the filter at layer i, and F(x) denotes the residual mapping; a sub-residual block is formed by connecting a convolutional layer with a λ skip, and two sub-residual blocks connected with a β skip form a super-resolution residual block.
4. The PET super-resolution method of claim 3, wherein the multi-level skip module operates as follows: with input x of the multi-level skip-connected residual block, the first sub-residual block outputs y₁, the second sub-residual block outputs y₂, and the super-resolution residual module outputs y₃; the outputs are
y₁ = W₁(x) + λx  (2)
y₂ = W₂(y₁) + λy₁ = W₂(W₁(x) + λx) + λW₁(x) + λ²x  (3)
y₃ = y₂ + β₁x = W₂(W₁(x) + λx) + λW₁(x) + (λ² + β₁)x  (4)
the output y₄ of the third sub-residual block is
y₄ = W₃(y₃) + λy₃ = W₃(y₂ + β₁x) + λW₂(y₁) + λ²W₁(x) + (λ³ + λβ₁)x  (5)
the output y₅ of the fourth sub-residual block is
y₅ = W₄(y₄) + λy₄ = W₄(y₄) + λW₃(y₂ + β₁x) + λ²W₂(y₁) + λ³W₁(x) + (λ⁴ + λ²β₁)x  (6)
the output y₆ of the second multi-level skip connection of the residual blocks is
y₆ = y₅ + β₂y₃ = W₄(y₄) + λW₃(y₂ + β₁x) + (λ² + β₂)W₂(y₁) + (λ³ + λβ₂)W₁(x) + (λ² + β₁)(λ² + β₂)x  (7)
where W₁–W₄ denote the filter weights of the first to fourth layers, λ is the skip-connection parameter of the sub-residual blocks, and β₁ and β₂ are the first and second skip-connection parameters of the super-resolution residual blocks.
5. The PET super-resolution method of claim 4, wherein in the third step the stochastic gradient descent method uses the mean square error (MSE) as the loss function to estimate the network parameter θ; the MSE function is
L(θ) = (1/n) Σ_{i=1}^{n} ‖G(Yᵢ; θ) − Xᵢ‖²  (8)
where the parameter θ is estimated by minimizing the loss between the reconstructed image and the corresponding true HR image, G(Yᵢ; θ) is the reconstructed image, Xᵢ is an HR image sample, Yᵢ is the corresponding LR image sample, and n is the number of training samples.
6. The PET super-resolution method of a multiple super-resolution residual network according to claim 5, wherein the update of the parameter θ by the stochastic gradient descent method is:
g_t^l = ∂L(θ)/∂W_t^l  (9)
Δ_{t+1} = μΔ_t − η·g_t^l  (10)
W_{t+1}^l = W_t^l + Δ_{t+1}  (11)
where t is the iteration number, l is the index of the convolutional layer, μ ∈ [0, 1] is the momentum weight of the previous iteration, and η is the learning rate.
7. The PET super-resolution method of a multiple super-resolution residual network according to claim 6, wherein the same residual function F(x) = max(0, Wᵢ·x + Bᵢ) replaces the ReLU function as the activation function of the network trained by the stochastic gradient descent method, where i denotes the layer index, Wᵢ is the filter weight of layer i, and Bᵢ is the bias.
8. The PET super-resolution method of claim 7, wherein in the deep residual network the stride of all convolutional layers is set to 1 and the padding is set to 1; the skip-connection parameters of the network residual blocks are λ = 0.1 and β₁ = 0.1.
9. The PET super-resolution method of a multiple super-resolution residual network according to claim 8, wherein in the first step each image is scaled once each by 0.7, 0.8 and 0.9, one picture being obtained each time, and each image is rotated by 90°, 180° and 270°, one picture being obtained per rotation, thereby realizing data amplification.
CN202210988095.7A 2022-08-17 2022-08-17 PET super-resolution method of multiple super-resolution residual error network Pending CN115063298A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210988095.7A CN115063298A (en) 2022-08-17 2022-08-17 PET super-resolution method of multiple super-resolution residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210988095.7A CN115063298A (en) 2022-08-17 2022-08-17 PET super-resolution method of multiple super-resolution residual error network

Publications (1)

Publication Number Publication Date
CN115063298A true CN115063298A (en) 2022-09-16

Family

ID=83208545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210988095.7A Pending CN115063298A (en) 2022-08-17 2022-08-17 PET super-resolution method of multiple super-resolution residual error network

Country Status (1)

Country Link
CN (1) CN115063298A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345449A (en) * 2018-07-17 2019-02-15 西安交通大学 A kind of image super-resolution based on converged network and remove non-homogeneous blur method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345449A (en) * 2018-07-17 2019-02-15 西安交通大学 A kind of image super-resolution based on converged network and remove non-homogeneous blur method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
毕晓君 et al.: "Super-resolution reconstruction of airborne remote sensing images based on generative adversarial networks", CAAI Transactions on Intelligent Systems *
赵小强 et al.: "Super-resolution reconstruction with a deep residual network with multi-level skip connections", Journal of Electronics & Information Technology *

Similar Documents

Publication Publication Date Title
CN110033410B (en) Image reconstruction model training method, image super-resolution reconstruction method and device
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN108734659B (en) Sub-pixel convolution image super-resolution reconstruction method based on multi-scale label
CN111105352B (en) Super-resolution image reconstruction method, system, computer equipment and storage medium
CN109146784B (en) Image super-resolution reconstruction method based on multi-scale generation countermeasure network
CN108122197B (en) Image super-resolution reconstruction method based on deep learning
CN111598892B (en) Cell image segmentation method based on Res2-uneXt network structure
CN111062872A (en) Image super-resolution reconstruction method and system based on edge detection
Liu et al. An attention-based approach for single image super resolution
CN110136062B (en) Super-resolution reconstruction method combining semantic segmentation
CN110675321A (en) Super-resolution image reconstruction method based on progressive depth residual error network
CN109523470B (en) Depth image super-resolution reconstruction method and system
CN105844630A (en) Binocular visual image super-resolution fusion de-noising method
CN112862689B (en) Image super-resolution reconstruction method and system
CN109214989A (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
CN112580473B (en) Video super-resolution reconstruction method integrating motion characteristics
CN111932461A (en) Convolutional neural network-based self-learning image super-resolution reconstruction method and system
CN112967185A (en) Image super-resolution algorithm based on frequency domain loss function
CN115880158A (en) Blind image super-resolution reconstruction method and system based on variational self-coding
CN115797176A (en) Image super-resolution reconstruction method
CN115526777A (en) Blind over-separation network establishing method, blind over-separation method and storage medium
CN115526779A (en) Infrared image super-resolution reconstruction method based on dynamic attention mechanism
CN111414988A (en) Remote sensing image super-resolution method based on multi-scale feature self-adaptive fusion network
CN109272450A (en) A kind of image oversubscription method based on convolutional neural networks
Yang et al. RSAMSR: A deep neural network based on residual self-encoding and attention mechanism for image super-resolution

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220916

RJ01 Rejection of invention patent application after publication