CN111161158A - Image restoration method based on a new generative network structure

Publication number: CN111161158A
Authority: CN (China)
Prior art keywords: network, image, training, layer, discrimination
Legal status: Granted
Application number: CN201911217769.8A
Other languages: Chinese (zh)
Other versions: CN111161158B (en)
Inventors: 王敏, 林竹, 岳炜翔
Current assignee: Hohai University (HHU)
Original assignee: Hohai University (HHU)
Filing and priority date: 2019-12-03
Application filed by Hohai University (HHU)
Publication of CN111161158A: 2020-05-15
Application granted; publication of CN111161158B: 2022-08-26
Legal status: Active

Classifications

    • G06T5/00 Image enhancement or restoration (G06T: image data processing or generation, in general)
    • G06N3/045 Combinations of networks (G06N3/04: neural network architecture, e.g. interconnection topology)
    • G06N3/08 Learning methods (G06N3: computing arrangements based on biological models; G06N3/02: neural networks)

Abstract

The invention discloses an image restoration method based on a new generative network structure, comprising the following steps: inputting a complete three-channel image and a corresponding arbitrarily missing image; preprocessing the images; deploying an SE-ResNet-based generation network and a discrimination network; feeding the missing image into the generation network to obtain a repaired image; updating the parameters of the generation network using the repaired image and the original image; feeding the repaired image and the original image into the discrimination network simultaneously to train the discrimination network; jointly training the generation network and the discrimination network until the entire training set has been traversed several times, at which point the training stage ends; and randomly selecting missing images from the test set and feeding them through the trained generation network to obtain repaired images. By adding the SE-ResNet structure to the generator, the invention significantly reduces the parameter count, improves running speed, alleviates the vanishing-gradient phenomenon, strengthens the network's use of features, shortens repair time, and produces clearer and more realistic repaired images.

Description

Image restoration method based on a new generative network structure
Technical Field
The invention belongs to the technical field of computer vision and image processing, and particularly relates to an image restoration method based on a new generative network structure.
Background
Image restoration is an image processing technique that uses the undamaged information in an image to repair missing regions, or to remove specific content, without degrading the image's quality or natural appearance. The core challenge of this technique is to synthesize visually realistic and semantically reasonable pixels for the missing regions that remain consistent with the existing pixels. Image restoration has great practical significance and many applications, particularly in the preservation of works of art, the restoration of old photographs, image-based rendering, and computational photography.
Many image restoration methods already exist, among which methods based on deep learning are notably effective. However, the large-scale networks designed by existing methods cannot fully extract and exploit image features, and the sharpness and fidelity of the generated images remain unsatisfactory.
Therefore, a new technical solution is needed to solve this problem.
Disclosure of Invention
The purpose of the invention is as follows: to overcome the defects of the prior art, an image restoration method based on a new generative network structure is provided.
The technical scheme is as follows: to achieve the above object, the present invention provides an image restoration method based on a new generative network structure, comprising the following steps:
S1: inputting a complete three-channel image and a corresponding arbitrarily missing image;
S2: preprocessing the images obtained in step S1 and cropping them to a fixed size;
S3: deploying an SE-ResNet-based generation network and a discrimination network;
S4: feeding the missing image processed in step S2 into the generation network to obtain a repaired image;
S5: updating the parameters of the generation network using the repaired image obtained in step S4 and the original image;
S6: repeating step S5 until the entire training set has been traversed several times;
S7: feeding the repaired image obtained in step S4 and the original image into the discrimination network simultaneously to train the discrimination network;
S8: repeating step S7 until the entire training set has been traversed several times;
S9: jointly training the generation network and the discrimination network until the entire training set has been traversed several times, at which point the training stage ends;
S10: randomly selecting missing images from the test set and feeding them through the trained generation network to obtain repaired images.
Further, in step S1, the arbitrarily missing image is obtained by generating a random mask M of the same size as the original image and consisting only of 0s and 1s; the element-wise product of the original image and the mask M is the input missing image.
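As a rough sketch of this step (assuming PyTorch, which the patent does not prescribe, and a simple rectangular hole, whereas the patent allows an arbitrary mask; all names are illustrative):

    import torch

    def random_mask(h, w, hole_h, hole_w):
        """Binary mask of shape (1, h, w): 1 keeps a pixel, 0 marks it as missing."""
        mask = torch.ones(1, h, w)
        top = torch.randint(0, h - hole_h + 1, (1,)).item()
        left = torch.randint(0, w - hole_w + 1, (1,)).item()
        mask[:, top:top + hole_h, left:left + hole_w] = 0.0
        return mask

    image = torch.rand(3, 128, 128)    # stand-in for a complete three-channel image
    M = random_mask(128, 128, 48, 48)  # hole size and position are illustrative choices
    missing = image * M                # element-wise product gives the missing image
    # Note: the discriminator-training formulas later in this document use the
    # complementary convention, in which 1 marks the missing region (i.e. 1 - this M).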
Further, the deployment of the generation network in step S3 is specifically as follows:
A) the encoder part of the generation network first passes the input through a convolutional layer with kernel size 5x5 and stride 1, producing c output channels. This is followed by three stages of SE-ResNet-based residual blocks: the first stage contains 3 sub-residual blocks, the second 4, and the third 6, and each sub-residual block consists of two convolutional layers with kernel size 3x3. Each stage downsamples the image in the first convolution of its first sub-residual block and doubles the channel count, the final channel count being 4c. The features then pass through two convolutional layers with kernel size 3x3 and stride 1 and 4c output channels, and finally through 4 dilated convolutional layers with kernel size 3x3, stride 1, and 4c output channels. At this point the image has been reduced to 1/4 of its original size and has 4c channels. The resulting feature map contains rich feature information and is passed to the decoder for decoding;
B) the decoder part of the generation network first passes the features through a deconvolution layer with kernel size 4x4, stride 2, and c/2 output channels, followed by a convolutional layer with kernel size 3x3, stride 1, and c/2 output channels. The next two layers are the same, except that the deconvolution layer and the ordinary convolutional layer that follows it both have c/4 output channels; at this point the image has been restored to the same size as the original, but with c/4 channels. The features then pass through two convolutional layers with kernel size 3x3 and stride 1, with c/8 and 3 output channels respectively, and finally through a sigmoid layer. Every convolution above except the last is followed by BatchNorm and ReLU operations.
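The following is a minimal PyTorch sketch of the generator described in A) and B) above, not the patent's reference implementation. The text leaves two points open, so the sketch makes labeled assumptions: only the first two SE-ResNet stages downsample (which reconciles the stated 1/4 output size and 4c channel count with the decoder's two 2x upsamplings), and the four dilated convolutions use a dilation rate of 2. All class and variable names are illustrative.

    import torch
    import torch.nn as nn

    def conv_bn_relu(cin, cout, k=3, s=1, d=1):
        """Convolution followed by BatchNorm and ReLU, with 'same' padding."""
        p = d * (k // 2)
        return nn.Sequential(nn.Conv2d(cin, cout, k, s, p, dilation=d),
                             nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

    class SEResBlock(nn.Module):
        """Sub-residual block: two 3x3 convolutions plus an SE recalibration."""
        def __init__(self, cin, cout, stride=1, r=16):
            super().__init__()
            self.body = nn.Sequential(conv_bn_relu(cin, cout, 3, stride),
                                      nn.Conv2d(cout, cout, 3, 1, 1),
                                      nn.BatchNorm2d(cout))
            self.se = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                    nn.Conv2d(cout, cout // r, 1),
                                    nn.ReLU(inplace=True),
                                    nn.Conv2d(cout // r, cout, 1), nn.Sigmoid())
            self.skip = (nn.Identity() if cin == cout and stride == 1
                         else nn.Conv2d(cin, cout, 1, stride))
        def forward(self, x):
            y = self.body(x)
            return torch.relu(self.skip(x) + y * self.se(y))

    def stage(cin, cout, n, stride):
        """A stage of n sub-residual blocks; only the first may downsample."""
        return nn.Sequential(SEResBlock(cin, cout, stride),
                             *[SEResBlock(cout, cout) for _ in range(n - 1)])

    class Generator(nn.Module):
        def __init__(self, c=64):
            super().__init__()
            self.encoder = nn.Sequential(
                conv_bn_relu(3, c, 5, 1),        # 5x5 stem, stride 1, c channels
                stage(c, 2 * c, 3, 2),           # 3 sub-blocks, 1/2 size (assumed)
                stage(2 * c, 4 * c, 4, 2),       # 4 sub-blocks, 1/4 size (assumed)
                stage(4 * c, 4 * c, 6, 1),       # 6 sub-blocks, size kept (assumed)
                conv_bn_relu(4 * c, 4 * c),      # two plain 3x3 convolutions
                conv_bn_relu(4 * c, 4 * c),
                *[conv_bn_relu(4 * c, 4 * c, d=2) for _ in range(4)])  # 4 dilated
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(4 * c, c // 2, 4, 2, 1),  # 4x4 deconv, stride 2
                nn.BatchNorm2d(c // 2), nn.ReLU(inplace=True),
                conv_bn_relu(c // 2, c // 2),
                nn.ConvTranspose2d(c // 2, c // 4, 4, 2, 1),  # back to full size
                nn.BatchNorm2d(c // 4), nn.ReLU(inplace=True),
                conv_bn_relu(c // 4, c // 4),
                conv_bn_relu(c // 4, c // 8),
                nn.Conv2d(c // 8, 3, 3, 1, 1),   # last conv: no BatchNorm/ReLU
                nn.Sigmoid())
        def forward(self, x):
            return self.decoder(self.encoder(x))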
Further, the discrimination network in step S3 is divided into a local discrimination network and a global discrimination network; its deployment is specifically as follows:
a) the local discrimination network discriminates whether the generated content of the missing region is real or fake. It consists of five convolutional layers and one fully connected layer. The five convolutional layers all have kernel size 5 and stride 2, with output channel counts of c, 2c, 4c, 8c, and 8c in sequence, and each is followed by BatchNorm and ReLU operations. The fully connected layer has an output size of 1024 and is followed by a ReLU layer, so the final output is a 1024-dimensional vector;
b) the global discrimination network discriminates whether the globally generated image is real or fake; its structure is the same as that of the local discrimination network, and it likewise outputs a 1024-dimensional vector;
c) the two 1024-dimensional vectors are concatenated to obtain a 2048-dimensional vector.
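A minimal PyTorch sketch of this two-branch discriminator follows, under the same assumptions as the generator sketch; the input resolutions and the final real/fake head (a single linear layer with sigmoid) are illustrative choices the text does not specify.

    import torch
    import torch.nn as nn

    class DiscBranch(nn.Module):
        """Five 5x5 stride-2 convolutions plus one fully connected layer."""
        def __init__(self, c=64, in_size=128):
            super().__init__()
            layers, cin = [], 3
            for cout in [c, 2 * c, 4 * c, 8 * c, 8 * c]:
                layers += [nn.Conv2d(cin, cout, 5, 2, 2),
                           nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
                cin = cout
            self.conv = nn.Sequential(*layers)
            feat = in_size // 32                     # after five stride-2 convs
            self.fc = nn.Sequential(nn.Flatten(),
                                    nn.Linear(8 * c * feat * feat, 1024),
                                    nn.ReLU(inplace=True))
        def forward(self, x):
            return self.fc(self.conv(x))             # 1024-dimensional vector

    class Discriminator(nn.Module):
        def __init__(self, c=64, global_size=128, local_size=64):
            super().__init__()
            self.glob = DiscBranch(c, global_size)   # whole-image branch
            self.local = DiscBranch(c, local_size)   # missing-region branch
            self.head = nn.Sequential(nn.Linear(2048, 1), nn.Sigmoid())  # assumed
        def forward(self, x_global, x_local):
            v = torch.cat([self.glob(x_global), self.local(x_local)], dim=1)
            return self.head(v)                      # probability of "real"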
Further, step S5 computes the L2 distance between the repaired image and the original image as the reconstruction loss function of the generation network:
L_rec = ||x - G(x_0)||_2^2,
where x is the original image and G(x_0) is the repaired image. Gradient updates are performed using an Adadelta optimizer.
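A minimal sketch of this update, reusing the Generator class and mask M from the sketches above; the single shared mask per batch and the Adadelta hyper-parameters (library defaults) are simplifications, not values from the patent.

    import torch

    G = Generator(c=64)
    opt_g = torch.optim.Adadelta(G.parameters())

    def rec_loss(repaired, original):
        # L_rec = ||x - G(x_0)||_2^2, summed per image, averaged over the batch
        return ((original - repaired) ** 2).sum(dim=(1, 2, 3)).mean()

    x = torch.rand(8, 3, 128, 128)   # batch of complete images
    repaired = G(x * M)              # x * M is the missing image x_0
    loss = rec_loss(repaired, x)
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()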
Further, the training of the discrimination network in step S7 specifically comprises the following steps:
S7-1: fixing the parameters of the generation network, generating a randomly missing image, and feeding it into the trained generation network to obtain a repaired image G(x_0);
S7-2: feeding two groups of image pairs into the discrimination network. The first group consists of the original image x and a composite of the repaired image G(x_0) with the original image x, obtained by stitching the repaired content of the originally missing region together with the non-missing part of the original image; that is, the first group of inputs to the discrimination network is x and x*(1-M) + G(x_0)*M. The second group consists of the original image portion M*x and the repaired image portion M*G(x_0);
S7-3: constructing the loss function L_D = log D(g_1) + log(1 - D(g_2)), where g_1 and g_2 denote the two groups of inputs obtained in step S7-2, yielding the two losses L_real and L_fake. The final loss of the discrimination network is (L_real + L_fake)*α/2, and gradient updates are performed using an Adadelta optimizer.
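A minimal sketch of one discriminator update follows, reusing G, M, and the Discriminator from the sketches above, and reading g_1 as the real pair and g_2 as the fake pair. The value of α, and the use of the masked full image in place of a cropped local patch, are labeled simplifications.

    import torch

    D = Discriminator(c=64, global_size=128, local_size=128)
    opt_d = torch.optim.Adadelta(D.parameters())
    alpha, eps = 4e-4, 1e-8           # alpha is an illustrative weight

    x = torch.rand(8, 3, 128, 128)    # batch of original images
    hole = 1 - M                      # formulas above use 1 = missing region
    with torch.no_grad():             # generation-network parameters stay fixed
        g_out = G(x * M)              # repaired image G(x_0)
    composite = x * M + g_out * hole  # generated pixels only inside the hole

    # The masked full image stands in for the cropped local patch for simplicity.
    d_real = D(x, x * hole)              # g_1: original image and its local portion
    d_fake = D(composite, g_out * hole)  # g_2: composite and generated portion

    L_real = torch.log(d_real + eps).mean()
    L_fake = torch.log(1 - d_fake + eps).mean()
    L_D = -(L_real + L_fake) * alpha / 2  # negated so gradient descent maximizes it

    opt_d.zero_grad()
    L_D.backward()
    opt_d.step()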
Further, the specific steps of jointly training the generation network and the discrimination network in step S9 are as follows:
S9-1: training the discrimination network using the method of step S7;
S9-2: training the generation network: in addition to training by the method of step S5, the generation network is also trained jointly with the discrimination network, using the negation of the L_fake obtained in step S7-3 as an auxiliary loss, i.e., L_adv = -L_fake, so that the generator's loss function is L_G = L_rec + α*L_adv. Gradient updates are performed using an Adadelta optimizer, iterating through the entire training set several times.
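Continuing the sketches above (reusing G, D, M, hole, rec_loss, alpha, eps, and opt_g), one generator step of the joint training might look as follows; again a sketch under the stated assumptions, not the patent's implementation.

    repaired = G(x * M)                    # gradients now flow into the generator
    composite = x * M + repaired * hole
    d_fake = D(composite, repaired * hole)
    L_adv = -torch.log(1 - d_fake + eps).mean()   # L_adv = -L_fake
    L_G = rec_loss(repaired, x) + alpha * L_adv   # L_G = L_rec + alpha * L_adv
    opt_g.zero_grad()
    L_G.backward()                         # D also accumulates gradients here; they
    opt_g.step()                           # are cleared by opt_d.zero_grad() later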
The present invention trains a convolutional neural network consisting of an encoder and a decoder to predict the pixels of the missing region. The encoder compresses and extracts image features through layer-by-layer convolution, and the decoder restores the compressed image features and generates the pixels of the missing region. To obtain a clear repaired image, the semantic features of the image must be fully learned.
Beneficial effects: compared with the prior art, adding the SE-ResNet structure to the generator significantly reduces the parameter count, improves running speed, alleviates the vanishing-gradient phenomenon, strengthens the network's use of features, shortens repair time, and produces clearer and more realistic repaired images.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the generation network structure;
FIG. 3 is a schematic diagram of the discrimination network structure;
FIG. 4 shows test results on the test set.
Detailed Description
The invention is further elucidated with reference to the drawings and the embodiments.
In this embodiment, the image restoration method based on the new generative network structure provided by the present invention is applied to repair images from the CelebA face data set. As shown in FIG. 1, the specific steps are as follows:
S1: inputting a complete three-channel image and a corresponding arbitrarily missing image, where the arbitrarily missing image is obtained by generating a random mask M of the same size as the original image and consisting only of 0s and 1s; the element-wise product of the original image and the mask M is the input missing image.
S2: preprocessing the image obtained in step S1 and cropping it to a fixed size;
S3: deploying an SE-ResNet-based generation network G to obtain the generation network structure shown in FIG. 2, where the specific deployment comprises the following steps A and B:
A) the encoder part of the generation network first passes the input through a convolutional layer with kernel size 5x5 and stride 1, producing c output channels. This is followed by three stages of SE-ResNet-based residual blocks: the first stage contains 3 sub-residual blocks, the second 4, and the third 6, and each sub-residual block consists of two convolutional layers with kernel size 3x3. Each stage downsamples the image in the first convolution of its first sub-residual block and doubles the channel count, the final channel count being 4c. The features then pass through two convolutional layers with kernel size 3x3 and stride 1 and 4c output channels, and finally through 4 dilated convolutional layers with kernel size 3x3, stride 1, and 4c output channels. At this point the image has been reduced to 1/4 of its original size and has 4c channels. The resulting feature map contains rich feature information and is passed to the decoder for decoding;
B) the decoder part of the generation network first passes the features through a deconvolution layer with kernel size 4x4, stride 2, and c/2 output channels, followed by a convolutional layer with kernel size 3x3, stride 1, and c/2 output channels. The next two layers are the same, except that the deconvolution layer and the ordinary convolutional layer that follows it both have c/4 output channels; at this point the image has been restored to the same size as the original, but with c/4 channels. The features then pass through two convolutional layers with kernel size 3x3 and stride 1, with c/8 and 3 output channels respectively, and finally through a sigmoid layer. Every convolution above except the last is followed by BatchNorm and ReLU operations.
In this embodiment, the SE-ResNet added to the generation network inserts an SE (Squeeze-and-Excitation) module into a ResNet residual block. Given an input x with c_1 feature channels, a series of general transformations such as convolution produces features with c_2 channels. These features are then recalibrated by three operations. First, the Squeeze operation compresses the features along the spatial dimensions, turning each two-dimensional feature channel into a single real number. This number has, to some extent, a global receptive field, and the output dimension matches the number of input feature channels; it characterizes the global distribution of responses over the feature channels and gives even layers close to the input access to a global receptive field. Second, the Excitation operation, a mechanism similar to the gates in recurrent neural networks, generates a weight for each feature channel via learned parameters w that explicitly model the correlations between feature channels. Finally, the Reweight operation treats the output weights of the Excitation step as the importance of each feature channel after feature selection and multiplies them channel-wise onto the preceding features, recalibrating the original features in the channel dimension.
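A minimal PyTorch sketch of the SE module just described (Squeeze by global average pooling, Excitation by a two-layer gate, Reweight by channel-wise multiplication); the reduction ratio r = 16 follows the original SENet paper, as the patent does not specify it.

    import torch
    import torch.nn as nn

    class SEModule(nn.Module):
        def __init__(self, channels, r=16):
            super().__init__()
            self.squeeze = nn.AdaptiveAvgPool2d(1)          # B x C x 1 x 1
            self.excite = nn.Sequential(
                nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
                nn.Linear(channels // r, channels), nn.Sigmoid())
        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
            return x * w                                    # Reweight, channel-wise

    features = torch.rand(2, 64, 32, 32)
    recalibrated = SEModule(64)(features)                   # same shape as input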
S4: deploying a discrimination network D to obtain a discrimination network structure shown in FIG. 3, wherein the discrimination network is divided into a local discrimination network and a global discrimination network, and the deployment specifically comprises:
a) the local discrimination network discriminates whether the generated content of the missing region is real or fake. It consists of five convolutional layers and one fully connected layer. The five convolutional layers all have kernel size 5 and stride 2, with output channel counts of c, 2c, 4c, 8c, and 8c in sequence, and each is followed by BatchNorm and ReLU operations. The fully connected layer has an output size of 1024 and is followed by a ReLU layer, so the final output is a 1024-dimensional vector;
b) the global discrimination network discriminates whether the globally generated image is real or fake; its structure is the same as that of the local discrimination network, and it likewise outputs a 1024-dimensional vector;
c) the two 1024-dimensional vectors are concatenated to obtain a 2048-dimensional vector.
S5: feeding the preprocessed missing image from step S2 into the generation network to obtain a repaired image.
S6: updating the parameters of the generation network using the repaired image obtained in step S5 and the original image: the L2 distance between the repaired image and the original image is computed as the reconstruction loss function of the generation network:
L_rec = ||x - G(x_0)||_2^2,
where x is the original image and G(x_0) is the repaired image. Gradient updates are performed using an Adadelta optimizer.
S7: repeating step S6 until the entire training set has been traversed several times;
S8: feeding the repaired image obtained in step S5 and the original image into the discrimination network simultaneously to train the discrimination network. The specific training process is as follows:
S8-1: fixing the parameters of the generation network, generating a randomly missing image, and feeding it into the trained generation network to obtain a repaired image G(x_0);
S8-2: feeding two groups of image pairs into the discrimination network. The first group consists of the original image x and a composite of the repaired image G(x_0) with the original image x, obtained by stitching the repaired content of the originally missing region together with the non-missing part of the original image; that is, the first group of inputs to the discrimination network is x and x*(1-M) + G(x_0)*M. The second group consists of the original image portion M*x and the repaired image portion M*G(x_0);
S8-3: constructing the loss function L_D = log D(g_1) + log(1 - D(g_2)), where g_1 and g_2 denote the two groups of inputs obtained in step S8-2, yielding the two losses L_real and L_fake. The final loss of the discrimination network is (L_real + L_fake)*α/2, and gradient updates are performed using an Adadelta optimizer.
S9: repeating step S8 until the entire training set has been traversed several times;
S10: jointly training the generation network and the discrimination network by combining the training methods of steps S6 and S8. The specific process is as follows:
S10-1: training the discrimination network using the method of step S8;
S10-2: training the generation network: in addition to training by the method of step S6, the generation network is also trained jointly with the discrimination network, using the negation of the L_fake obtained in step S8-3 as an auxiliary loss, i.e., L_adv = -L_fake, so that the generator's loss function is L_G = L_rec + α*L_adv. Gradient updates are performed using an Adadelta optimizer.
S11: repeating the training of step S10 until the entire training set has been traversed several times, at which point the training phase ends.
S12: randomly selecting missing images from the test set and feeding them through the trained generation network to obtain repaired images.
In this embodiment, the test results on the test set shown in FIG. 4 were obtained by the above method. In FIG. 4, the first and fourth rows are original images, the second and fifth rows are the images to be repaired, and the third and sixth rows are the repaired images. It can be seen that the repaired images have good sharpness and, compared with the original images, good fidelity.

Claims (6)

1. An image restoration method based on a new generative network structure, characterized by comprising the following steps:
S1: inputting a complete three-channel image and a corresponding arbitrarily missing image;
S2: preprocessing the images obtained in step S1;
S3: deploying an SE-ResNet-based generation network and a discrimination network;
S4: feeding the missing image processed in step S2 into the generation network to obtain a repaired image;
S5: updating the parameters of the generation network using the repaired image obtained in step S4 and the original image;
S6: repeating step S5 until the entire training set has been traversed several times;
S7: feeding the repaired image obtained in step S4 and the original image into the discrimination network simultaneously to train the discrimination network;
S8: repeating step S7 until the entire training set has been traversed several times;
S9: jointly training the generation network and the discrimination network until the entire training set has been traversed several times, at which point the training stage ends;
S10: randomly selecting missing images from the test set and feeding them through the trained generation network to obtain repaired images.
2. The method according to claim 1, characterized in that the deployment of the generation network in step S3 is specifically as follows:
A) the encoder part of the generation network first passes the input through a convolutional layer with kernel size 5x5 and stride 1, producing c output channels. This is followed by three stages of SE-ResNet-based residual blocks: the first stage contains 3 sub-residual blocks, the second 4, and the third 6, and each sub-residual block consists of two convolutional layers with kernel size 3x3. Each stage downsamples the image in the first convolution of its first sub-residual block and doubles the channel count, the final channel count being 4c. The features then pass through two convolutional layers with kernel size 3x3 and stride 1 and 4c output channels, and finally through 4 dilated convolutional layers with kernel size 3x3, stride 1, and 4c output channels, so that the image is reduced to 1/4 of its original size with 4c channels;
B) the decoder part of the generation network first passes the features through a deconvolution layer with kernel size 4x4, stride 2, and c/2 output channels, followed by a convolutional layer with kernel size 3x3, stride 1, and c/2 output channels. The next two layers are the same, except that the deconvolution layer and the ordinary convolutional layer that follows it both have c/4 output channels; at this point the image has been restored to the same size as the original, but with c/4 channels. The features then pass through two convolutional layers with kernel size 3x3 and stride 1, with c/8 and 3 output channels respectively, and finally through a sigmoid layer.
3. The image restoration method based on a new generative network structure according to claim 1, characterized in that the discrimination network in step S3 is divided into a local discrimination network and a global discrimination network, whose deployment is specifically as follows:
a) the local discrimination network discriminates whether the generated content of the missing region is real or fake. It consists of five convolutional layers and one fully connected layer. The five convolutional layers all have kernel size 5 and stride 2, with output channel counts of c, 2c, 4c, 8c, and 8c in sequence, and each is followed by BatchNorm and ReLU operations. The fully connected layer has an output size of 1024 and is followed by a ReLU layer, so the final output is a 1024-dimensional vector;
b) the global discrimination network discriminates whether the globally generated image is real or fake; its structure is the same as that of the local discrimination network, and it likewise outputs a 1024-dimensional vector;
c) the two 1024-dimensional vectors are concatenated to obtain a 2048-dimensional vector.
4. The method according to claim 1, characterized in that step S5 computes the L2 distance between the repaired image and the original image as the reconstruction loss function of the generation network:
L_rec = ||x - G(x_0)||_2^2,
where x is the original image and G(x_0) is the repaired image; gradient updates are performed using an Adadelta optimizer.
5. The method according to claim 1, characterized in that the training of the discrimination network in step S7 specifically comprises the following steps:
S7-1: fixing the parameters of the generation network, generating a randomly missing image, and feeding it into the trained generation network to obtain a repaired image G(x_0);
S7-2: feeding two groups of image pairs into the discrimination network. The first group consists of the original image x and a composite of the repaired image G(x_0) with the original image x, obtained by stitching the repaired content of the originally missing region together with the non-missing part of the original image and feeding the result into the discrimination network; that is, the first group of inputs to the discrimination network is x and x*(1-M) + G(x_0)*M. The second group consists of the original image portion M*x and the repaired image portion M*G(x_0);
S7-3: constructing the loss function L_D = log D(g_1) + log(1 - D(g_2)), where g_1 and g_2 denote the two groups of inputs obtained in step S7-2, yielding the two losses L_real and L_fake; the final loss of the discrimination network is (L_real + L_fake)*α/2, and gradient updates are performed using an Adadelta optimizer.
6. The image restoration method based on a new generative network structure according to claim 4 or 5, characterized in that the specific steps of jointly training the generation network and the discrimination network in step S9 are as follows:
S9-1: training the discrimination network;
S9-2: training the generation network: the generation network is trained jointly with the discrimination network, using the negation of the obtained L_fake as an auxiliary loss, i.e., L_adv = -L_fake, so that the generator's loss function is L_G = L_rec + α*L_adv; gradient updates are performed using an Adadelta optimizer, iterating through the entire training set several times.
CN201911217769.8A, filed 2019-12-03 (priority date 2019-12-03): Image restoration method based on a generative network structure. Active; granted as CN111161158B (en).

Priority Applications (1)

CN201911217769.8A (priority and filing date 2019-12-03): Image restoration method based on a generative network structure

Publications (2)

CN111161158A (en): published 2020-05-15
CN111161158B (en): published 2022-08-26

Family ID: 70556485

Family Applications (1)

CN201911217769.8A (priority and filing date 2019-12-03): Image restoration method based on a generative network structure; status: Active; granted as CN111161158B

Country Status (1)

CN: CN111161158B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190049540A1 (en) * 2017-08-10 2019-02-14 Siemens Healthcare Gmbh Image standardization using generative adversarial networks
CN108460746A (en) * 2018-04-10 2018-08-28 武汉大学 A kind of image repair method predicted based on structure and texture layer
CN109801230A (en) * 2018-12-21 2019-05-24 河海大学 A kind of image repair method based on new encoder structure
CN110458765A (en) * 2019-01-25 2019-11-15 西安电子科技大学 The method for enhancing image quality of convolutional network is kept based on perception

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612721A (en) * 2020-05-22 2020-09-01 哈尔滨工业大学(深圳) Image restoration model training method and device and satellite image restoration method and device
CN111612721B (en) * 2020-05-22 2023-09-22 哈尔滨工业大学(深圳) Image restoration model training method and device and satellite image restoration method and device
CN111899191A (en) * 2020-07-21 2020-11-06 武汉工程大学 Text image restoration method and device and storage medium
CN111899191B (en) * 2020-07-21 2024-01-26 武汉工程大学 Text image restoration method, device and storage medium
CN112465718A (en) * 2020-11-27 2021-03-09 东北大学秦皇岛分校 Two-stage image restoration method based on generation of countermeasure network
CN114331903A (en) * 2021-12-31 2022-04-12 电子科技大学 Image restoration method and storage medium
CN114331903B (en) * 2021-12-31 2023-05-12 电子科技大学 Image restoration method and storage medium

Also Published As

Publication number Publication date
CN111161158B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN111161158B (en) Image restoration method based on generated network structure
CN111292264B (en) Image high dynamic range reconstruction method based on deep learning
CN109377452B (en) Face image restoration method based on VAE and generation type countermeasure network
CN111612708B (en) Image restoration method based on countermeasure generation network
CN113689517B (en) Image texture synthesis method and system for multi-scale channel attention network
CN109920021B (en) Face sketch synthesis method based on regularized width learning network
CN110895795A (en) Improved semantic image inpainting model method
CN112884758B (en) Defect insulator sample generation method and system based on style migration method
CN109345604B (en) Picture processing method, computer device and storage medium
CN116645369B (en) Anomaly detection method based on twin self-encoder and two-way information depth supervision
CN107392213A (en) Human face portrait synthetic method based on the study of the depth map aspect of model
CN114022506A (en) Image restoration method with edge prior fusion multi-head attention mechanism
CN116645716A (en) Expression Recognition Method Based on Local Features and Global Features
CN114581789A (en) Hyperspectral image classification method and system
CN112686822B (en) Image completion method based on stack generation countermeasure network
CN116823647A (en) Image complement method based on fast Fourier transform and selective attention mechanism
CN115984949B (en) Low-quality face image recognition method and equipment with attention mechanism
CN114529450B (en) Face image super-resolution method based on improved depth iteration cooperative network
CN113205503B (en) Satellite coastal zone image quality evaluation method
CN112529098B (en) Dense multi-scale target detection system and method
CN113436094A (en) Gray level image automatic coloring method based on multi-view attention mechanism
CN113298814A (en) Indoor scene image processing method based on progressive guidance fusion complementary network
CN113554655A (en) Optical remote sensing image segmentation method and device based on multi-feature enhancement
CN113034390A (en) Image restoration method and system based on wavelet prior attention
Nie et al. Image restoration from patch-based compressed sensing measurement

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant