CN116362995A - Tooth image restoration method and system based on standard prior - Google Patents

Tooth image restoration method and system based on standard prior

Info

Publication number
CN116362995A
CN116362995A (application CN202310117668.3A)
Authority
CN
China
Prior art keywords
image
encoder
input
standard
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310117668.3A
Other languages
Chinese (zh)
Inventor
黄超
徐灏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pinguo Technology Co Ltd
Original Assignee
Chengdu Pinguo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pinguo Technology Co Ltd filed Critical Chengdu Pinguo Technology Co Ltd
Priority to CN202310117668.3A
Publication of CN116362995A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30036 Dental; Teeth
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image processing and particularly relates to a dental image restoration method and system based on a standard prior. The method of the invention comprises the following steps: step 1, inputting a face image with a low-quality tooth region; step 2, aligning the mouth region using a face key point detection method to obtain an input image; step 3, inputting the input image into a neural network model for restoration to obtain a high-quality tooth image. The neural network model adopts an Auto-Encoder structure, and the encoder of the Auto-Encoder structure comprises a main-network encoder and a standard prior encoder, where the standard prior encoder is obtained by training with a dataset of high-quality tooth images used as both input and output. The invention further provides a system for realizing the method. The invention builds a neural network model with low performance cost and low training-sample requirements that can be used to restore tooth images and has good application prospects.

Description

Tooth image restoration method and system based on standard prior
Technical Field
The invention belongs to the technical field of image processing and particularly relates to a dental image restoration method and system based on a standard prior.
Background
Image restoration is a specific problem in low-level vision: recovering a high-quality image from a low-quality input image, which typically suffers from various degradations such as blur, noise and JPEG artifacts. Image restoration is an ill-posed problem whose solution is not unique; content introduced during the restoration process can also lead to differences in shape or detail between the restored image and the original image.
In the field of image restoration, the most advanced solutions at present are neural-network-based restoration techniques, which achieve encouraging results all the way from the extraction of image features to the reconstruction of the whole image. In the prior art, the most common network structure for image restoration is the Auto-Encoder: an image is first downsampled through several network layers to obtain a latent code of the whole image, and the image is then restored by upsampling from that latent code. This process involves both loss of information and reconstruction of information.
Auto-Encoder-style restoration techniques mostly restore images by learning an image-to-image mapping: an artificial, simulated degradation function is used to generate low-quality input images, the high-quality original images serve as ground truth (GT), and the network is trained on such pairs to learn the mapping between low-quality and high-quality images, which is then used for the final restoration.
Network structures that rely on such a mapping require a large amount of sample data and artificially designed degradation functions. Because these artificial degradation processes cannot fit real-world degradation models, they fail to generalize to most scenes and only give good results for certain specific degradation scenarios.
Dental images play an important role in dental diagnosis and research. Dental image samples are expensive to collect, and no effective degradation function can be designed for them. As a result, only a small amount of paired (low- and high-quality) data is typically available for dental image restoration, which makes existing image restoration network models unsuitable for this task.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a tooth image restoration method and system based on a standard prior, which improve the structure of the image restoration network model so that a network model for tooth image restoration can be trained from only a small amount of paired data.
A dental image restoration method based on a standard prior, comprising the following steps:
step 1, inputting a face image with a low-quality tooth area;
step 2, aligning a mouth region by using a face key point detection method to obtain an input image;
step 3, inputting the input image into a neural network model for restoration to obtain a high-quality tooth image;
the neural network model adopts an Auto-Encoder structure, and the encoder of the Auto-Encoder structure comprises a main-network encoder and a standard prior encoder, where the standard prior encoder is obtained by training with a dataset of high-quality tooth images used as both input and output.
Preferably, step 2 specifically includes:
step 2.1, using a face key point detection method to obtain a second image in which the mouth region is aligned;
step 2.2, generating a third image and a fourth image from the face key points, where the mouth region is marked in the third image and the lip region and the tooth region are distinguished in the fourth image;
step 2.3, concatenating the third image and the fourth image along the channel dimension to obtain the input image.
Preferably, the encoder of the Auto-Encoder structure further comprises an identity-preserving encoder, and the identity-preserving encoder comprises a spatial attention and channel attention module.
Preferably, the input to the identity-preserving encoder is obtained by concatenating the second image, the third image and the fourth image along the channel dimension.
Preferably, during training of the neural network model, three loss functions, namely L1 loss, GAN loss and perceptual loss, are used together.
Preferably, when the neural network model performs convolution operations, a scaling coefficient is used to scale the weights of each layer, and the scaling coefficient is calculated as:
scale=1/sqrt(input_channels*kernel*kernel)
where scale is the scaling coefficient, sqrt() is the square-root function, input_channels is the number of input channels of the layer, and kernel is the convolution kernel size of the layer.
Preferably, the encoder of the Auto-Encoder structure applies convolutions that increase the channel dimension and downsamples the image by a factor of 8 to extract a latent code, where the latent code has 128 dimensions;
the input of the decoder of the Auto-Encoder structure is the 256-dimensional feature obtained after concatenation, and in the decoder the 256-dimensional feature undergoes channel-reducing convolutions and layer-by-layer bilinear upsampling.
The invention also provides a system for implementing the above dental image restoration method based on a standard prior, comprising:
the input module is used for inputting a face image with a low-quality tooth area;
the image preprocessing module is used for aligning the mouth area by using a face key point detection method to obtain an input image;
the restoration module is used for inputting the input image into a neural network model for restoration to obtain a high-quality tooth image;
the output module is used for outputting high-quality tooth images;
the neural network model adopts an Auto-Encoder structure, and the encoder of the Auto-Encoder structure comprises a main-network encoder and a standard prior encoder, where the standard prior encoder is obtained by training with a dataset of high-quality tooth images used as both input and output.
The present invention also provides a computer readable storage medium having stored thereon a computer program for implementing the above-described standard prior based dental image restoration method.
By adding the standard prior to the neural network model, the invention can produce realistic tooth restoration results and can be trained with very little training data while still obtaining a neural network model with excellent restoration performance, which greatly reduces the difficulty of collecting and labeling samples and makes the model well suited to the tooth image restoration task.
It should be apparent that, in light of the foregoing, various modifications, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
The above-described aspects of the present invention will be described in further detail below with reference to specific embodiments in the form of examples. The scope of the above subject matter of the present invention should not, however, be understood as being limited to the following examples; all techniques implemented based on the above description of the invention fall within the scope of the invention.
Drawings
FIG. 1 is a schematic diagram of the neural network model of embodiment 1;
FIG. 2 is an example of the second image of embodiment 1;
FIG. 3 is an example of the third image of embodiment 1;
FIG. 4 is an example of the fourth image of embodiment 1;
FIG. 5 is an example of a training image for the standard prior encoder of embodiment 1;
FIG. 6 is an example of the restoration effect of the neural network model of embodiment 1.
Detailed Description
It should be noted that, in the embodiments, algorithms for steps such as data acquisition, transmission, storage and processing that are not specifically described, as well as hardware structures and circuit connections that are not specifically described, may be implemented using the prior art.
Example 1: Dental image restoration method and system based on a standard prior
The system of the present embodiment includes:
the input module is used for inputting a face image with a low-quality tooth area;
the image preprocessing module is used for aligning the mouth area by using a face key point detection method to obtain an input image;
the restoration module is used for inputting the input image into a neural network model for restoration to obtain a high-quality tooth image;
the output module is used for outputting high-quality tooth images;
the neural network model adopts an Auto-Encoder structure, and the encoder of the Auto-Encoder structure comprises a main-network encoder, a standard prior encoder and an identity-preserving encoder.
The dental image restoration method of the embodiment comprises the following steps:
step 1, inputting a face image with a low-quality tooth area;
step 2, aligning a mouth region by using a face key point detection method to obtain an input image;
and step 3, inputting the input image into a neural network model for restoration, and obtaining a high-quality tooth image.
When the system and method of this embodiment are used, the neural network model is constructed and trained as follows:
1. The neural network of this embodiment adopts an Auto-Encoder structure, shown in FIG. 1. In this embodiment, during convolution operations a scaling coefficient computed from the size of the input feature is used to scale the weights of each layer in order to stabilize training; specifically, the scaling coefficient is calculated as:
scale=1/sqrt(input_channels*kernel*kernel)
where scale is the scaling coefficient, sqrt() is the square-root function, input_channels is the number of input channels of the layer, and kernel is the convolution kernel size of the layer.
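As an illustrative sketch only (not part of the original disclosure), the weight scaling described above could be realized in PyTorch roughly as follows; the class name ScaledConv2d and its default arguments are assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledConv2d(nn.Module):
    """Convolution whose weights are rescaled at every forward pass with
    scale = 1 / sqrt(input_channels * kernel * kernel), as described above."""
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_channels, in_channels,
                                               kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_channels))
        self.scale = 1.0 / math.sqrt(in_channels * kernel_size * kernel_size)
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # Scaling the weights (rather than the activations) keeps the effective
        # update magnitude comparable across layers, which stabilizes training.
        return F.conv2d(x, self.weight * self.scale, self.bias,
                        stride=self.stride, padding=self.padding)
```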
The encoder applies convolutions that increase the channel dimension and downsamples the image by a factor of 8 to extract the latent code; the latent code has 128 dimensions, so efficient inference can be performed on mobile devices.
1.1 Since downsampling inevitably loses image information, the corresponding encoder features are concatenated in at the decoder stage, i.e. a skip connection is used, so that the high-frequency details can be reconstructed; the decoder input is therefore 256-dimensional. This part of the encoder features contains rich high-frequency detail, which benefits the final restoration.
1.2 The concatenated 256-dimensional features undergo channel-reducing convolutions and layer-by-layer bilinear upsampling in the decoder, gradually restoring the original resolution.
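A structural sketch of such an encoder/decoder pair, again in PyTorch and purely illustrative: only the 8x downsampling, the 128-channel latent code, the 256-channel decoder input formed by the skip concatenation, and the bilinear upsampling follow the description above; the class names, per-stage channel counts and activations are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MainEncoder(nn.Module):
    """Three stride-2 convolutions give 8x spatial downsampling; a final 1x1
    convolution produces the 128-channel latent code."""
    def __init__(self, in_channels=6):  # 6 = third image + fourth image, concatenated
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(in_channels, 32, 3, 2, 1), nn.LeakyReLU(0.2))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.LeakyReLU(0.2))
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.LeakyReLU(0.2))
        self.to_latent = nn.Conv2d(128, 128, 1)

    def forward(self, x):
        skip = self.stage3(self.stage2(self.stage1(x)))  # 128 channels, 1/8 resolution
        latent = self.to_latent(skip)                    # 128-channel latent code
        return latent, skip                              # skip feeds the skip connection

class MainDecoder(nn.Module):
    """Takes the 256-channel concatenation of latent and skip features and restores
    the resolution with channel-reducing convolutions and bilinear upsampling."""
    def __init__(self, out_channels=3):
        super().__init__()
        self.reduce1 = nn.Sequential(nn.Conv2d(256, 128, 3, 1, 1), nn.LeakyReLU(0.2))
        self.reduce2 = nn.Sequential(nn.Conv2d(128, 64, 3, 1, 1), nn.LeakyReLU(0.2))
        self.reduce3 = nn.Sequential(nn.Conv2d(64, 32, 3, 1, 1), nn.LeakyReLU(0.2))
        self.to_rgb = nn.Conv2d(32, out_channels, 3, 1, 1)

    def forward(self, latent, skip):
        x = torch.cat([latent, skip], dim=1)             # 256 channels at 1/8 resolution
        for stage in (self.reduce1, self.reduce2, self.reduce3):
            x = F.interpolate(stage(x), scale_factor=2,
                              mode="bilinear", align_corners=False)
        return self.to_rgb(x)
```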
1.3 Because a plain Auto-Encoder merely compresses image features and cannot achieve effective restoration, this embodiment first applies a face key point detection algorithm and then a similarity transform to obtain a second image with the mouth region aligned (as shown in FIG. 2). The alignment size used here is 512×256 (width × height). After the mouth region is obtained from the face key points, that region is filled with 0 to obtain a third image (as shown in FIG. 3), forcing the network to pay attention to this region. Since the region contains both lips and teeth, and since lips differ from person to person while the brightness and shape of teeth also vary, the region is further subdivided to support better restoration: the lip region and the tooth region are distinguished from the face key points, and each of the two sub-regions is replaced by its average value to obtain a fourth image (as shown in FIG. 4) that guides the network toward a more reasonable output.
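The preprocessing of 1.3 could be sketched with OpenCV/NumPy as below; the function names, the canonical landmark positions and the mouth/lip/tooth polygons are assumptions, since the patent does not name a specific face key point detector.

```python
import cv2
import numpy as np

ALIGN_W, ALIGN_H = 512, 256  # alignment size: 512 wide, 256 high

def align_mouth(image, mouth_pts, canonical_pts):
    """Second image: similarity transform mapping detected mouth landmarks
    (mouth_pts) to canonical positions (canonical_pts) inside a 512x256 crop."""
    M, _ = cv2.estimateAffinePartial2D(mouth_pts.astype(np.float32),
                                       canonical_pts.astype(np.float32))
    return cv2.warpAffine(image, M, (ALIGN_W, ALIGN_H)), M

def make_third_image(aligned, mouth_poly):
    """Third image: the mouth region is filled with 0 to force attention on it."""
    third = aligned.copy()
    cv2.fillPoly(third, [mouth_poly.astype(np.int32)], color=(0, 0, 0))
    return third

def make_fourth_image(aligned, lip_poly, tooth_poly):
    """Fourth image: the lip and tooth regions are each replaced by their mean
    value, giving a coarse per-person cue about lip and tooth appearance."""
    fourth = aligned.copy()
    for poly in (lip_poly, tooth_poly):
        mask = np.zeros(aligned.shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [poly.astype(np.int32)], 255)
        fourth[mask > 0] = cv2.mean(aligned, mask=mask)[:3]
    return fourth

def build_network_input(third, fourth):
    """Main-network input: third and fourth images concatenated by channel (H x W x 6)."""
    return np.concatenate([third, fourth], axis=2)
```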
2. To better guide the restoration, this embodiment introduces a standard prior encoder that guides the final tooth restoration, forcing the network to generate more realistic and structurally reasonable tooth shapes. The standard prior encoder is shown in the uppermost part of FIG. 1.
2.1 Specifically, an Auto-Encoder network is used to obtain the latent image code; note that there is no skip connection between this encoder and decoder, which yields a more compact and purer latent code and therefore a more general standard prior. Only the standard (high-quality) images shown in FIG. 5 are used for training, serving as both input and output, and the network is trained with the three loss functions L1 loss, GAN loss and perceptual loss together until the output is essentially consistent with the input. After training to convergence, the encoder has extracted the standard prior information of the standard images, which is then used to guide the main restoration network. In use, only the encoder is kept and the decoder is removed.
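A hedged sketch of the combined loss used to train the standard prior Auto-Encoder on high-quality images (input equals output). The VGG16 backbone for the perceptual loss, the hinge-style GAN term, the loss weights and the module interfaces are all assumptions; the patent only names the three loss types.

```python
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class PerceptualLoss(nn.Module):
    """L1 distance between VGG16 features of prediction and target
    (one common choice; the patent does not specify a backbone)."""
    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg

    def forward(self, pred, target):
        return F.l1_loss(self.vgg(pred), self.vgg(target))

def prior_generator_loss(encoder, decoder, discriminator, perceptual, hq_batch,
                         w_l1=1.0, w_perc=1.0, w_gan=0.1):
    """One generator-side loss evaluation for the standard prior Auto-Encoder:
    the high-quality image is both the input and the reconstruction target."""
    recon = decoder(encoder(hq_batch))        # note: no skip connection here
    loss_l1 = F.l1_loss(recon, hq_batch)
    loss_perc = perceptual(recon, hq_batch)
    loss_gan = -discriminator(recon).mean()   # hinge-style generator term (assumption)
    return w_l1 * loss_l1 + w_perc * loss_perc + w_gan * loss_gan
```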
3. The main network (the middle part of FIG. 1) detects key points on the whole face and then aligns the mouth region to obtain the tooth-aligned second image. At the same time, the corresponding third image and fourth image are generated from the face key points and concatenated to form the input image. The features of the last 4 layers of the standard prior encoder from section 2.1 are then introduced, i.e., the standard prior information from section 2.1 is accessed; this information encodes the standard tooth shape and orientation, and it is injected by directly adding it to the main-network codec, so these features positively guide the network toward generating teeth with sharper edges.
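One possible way to inject the standard prior features by direct addition, sketched in PyTorch. The optional 1x1 channel-aligning convolution is an assumption; the patent only states that the prior features of the last 4 layers are directly added to the main-network codec, so in practice one such module would be used per injected layer.

```python
import torch.nn as nn

class PriorInjection(nn.Module):
    """Adds a feature map from the frozen standard prior encoder element-wise
    to the corresponding feature of the main codec."""
    def __init__(self, prior_channels, main_channels):
        super().__init__()
        # 1x1 conv only if channel counts differ (an assumption, not from the patent)
        self.align = (nn.Identity() if prior_channels == main_channels
                      else nn.Conv2d(prior_channels, main_channels, kernel_size=1))

    def forward(self, main_feat, prior_feat):
        return main_feat + self.align(prior_feat)
```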
3.1 To maintain identity consistency, an identity-preserving encoder is added to the main network (the lowest part of FIG. 1). The second, third and fourth images are concatenated along the channel dimension into a 9-channel input, which is passed through the identity-preserving encoder, and the resulting features are then fused into the main network. If the fusion were a simple feature concatenation, the features could not be integrated well and the restoration capability would drop; instead, the identity-preserving encoder features are passed through a spatial attention and channel attention module, so that the network can choose what is important to keep and what is unimportant to discard, yielding a better restoration. The re-weighted features are then added to the corresponding features of the main network.
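A CBAM-style sketch of the channel and spatial attention fusion for the identity-preserving encoder features. The reduction ratio, kernel size, attention formulation and the final addition onto the main-network features are assumptions; the patent only states that the features pass through a spatial attention and channel attention module before being weighted onto the corresponding main-network features.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.mlp(x)          # per-channel re-weighting

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class IdentityFusion(nn.Module):
    """Re-weights identity-encoder features with channel and spatial attention,
    then adds them to the main-network features."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, main_feat, identity_feat):
        return main_feat + self.sa(self.ca(identity_feat))
```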
3.2 Training: the network is likewise trained with the three loss functions L1 loss, GAN loss and perceptual loss together. During the training phase the "standard prior" weights are not updated, so this part stays constant; the update of the network covers only the main network (including its encoder and decoder, between which the skip connection exists) and the identity-preserving encoder.
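A short sketch of the corresponding optimizer setup, in which the standard prior encoder is frozen and only the main network and the identity-preserving encoder are updated; the function name, the optimizer choice and the learning rate are assumptions.

```python
import torch
import torch.nn as nn

def build_optimizer(main_encoder: nn.Module, main_decoder: nn.Module,
                    identity_encoder: nn.Module, prior_encoder: nn.Module,
                    lr: float = 2e-4) -> torch.optim.Optimizer:
    """Freeze the 'standard prior' weights; optimize only the main network
    (encoder and decoder with their skip connection) and the identity encoder."""
    for p in prior_encoder.parameters():
        p.requires_grad_(False)          # standard prior stays constant
    params = (list(main_encoder.parameters()) + list(main_decoder.parameters())
              + list(identity_encoder.parameters()))
    return torch.optim.Adam(params, lr=lr)   # Adam and lr=2e-4 are assumptions
```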
In the neural network model constructed and trained in this way, the main network incorporates rich "standard prior" features and "identity consistency" features, and the fused features finally pass through the main-network codec, so the model has a strong image restoration capability. An example of a high-quality tooth image restored by the system and method of this embodiment is shown in FIG. 6.
As can be seen from the above embodiment, the whole network uses 3 encoders and one decoder (as shown in FIG. 1). Since the standard prior is a feature trained in advance, it stays constant during subsequent training and inference and adds no significant performance overhead. Because the scheme uses the standard prior to guide the output of the network, only about 3000 training samples are needed, which is a drastic cost reduction compared with the tens of thousands of samples commonly required for neural network training. The invention therefore builds a neural network model with low performance cost and low training-sample requirements that can be used for tooth image restoration and has good application prospects.

Claims (9)

1. A dental image restoration method based on a standard prior, comprising the following steps:
step 1, inputting a face image with a low-quality tooth area;
step 2, aligning a mouth region by using a face key point detection method to obtain an input image;
step 3, inputting the input image into a neural network model for restoration to obtain a high-quality tooth image;
the neural network model adopts an Auto-Encoder structure, and the encoder of the Auto-Encoder structure comprises a main-network encoder and a standard prior encoder, where the standard prior encoder is obtained by training with a dataset of high-quality tooth images used as both input and output.
2. The dental image restoration method according to claim 1, wherein step 2 specifically comprises:
step 2.1, using a face key point detection method to obtain a second image in which the mouth region is aligned;
step 2.2, generating a third image and a fourth image from the face key points, where the mouth region is marked in the third image and the lip region and the tooth region are distinguished in the fourth image;
step 2.3, concatenating the third image and the fourth image along the channel dimension to obtain the input image.
3. The dental image restoration method according to claim 2, wherein the encoder of the Auto-Encoder structure further comprises an identity-preserving encoder, and the identity-preserving encoder comprises a spatial attention and channel attention module.
4. The dental image restoration method according to claim 3, wherein the input to the identity-preserving encoder is obtained by concatenating the second image, the third image and the fourth image along the channel dimension.
5. The dental image restoration method according to claim 1, wherein during training of the neural network model, three loss functions, namely L1 loss, GAN loss and perceptual loss, are used together.
6. The dental image restoration method according to claim 1, wherein when the neural network model performs convolution operations, a scaling coefficient is used to scale the weights of each layer, and the scaling coefficient is calculated as:
scale=1/sqrt(input_channels*kernel*kernel)
where scale is the scaling coefficient, sqrt() is the square-root function, input_channels is the number of input channels of the layer, and kernel is the convolution kernel size of the layer.
7. The dental image restoration method according to claim 1, wherein the encoder of the Auto-Encoder structure applies convolutions that increase the channel dimension and downsamples the image by a factor of 8 to extract a latent code, where the latent code has 128 dimensions;
the input of the decoder of the Auto-Encoder structure is the 256-dimensional feature obtained after concatenation, and in the decoder the 256-dimensional feature undergoes channel-reducing convolutions and layer-by-layer bilinear upsampling.
8. A system for implementing the standard prior based dental image restoration method of any one of claims 1-7, comprising:
the input module is used for inputting a face image with a low-quality tooth area;
the image preprocessing module is used for aligning the mouth area by using a face key point detection method to obtain an input image;
the restoration module is used for inputting the input image into a neural network model for restoration to obtain a high-quality tooth image;
the output module is used for outputting high-quality tooth images;
the neural network model adopts an Auto-Encoder structure, and the encoder of the Auto-Encoder structure comprises a main-network encoder and a standard prior encoder, where the standard prior encoder is obtained by training with a dataset of high-quality tooth images used as both input and output.
9. A computer-readable storage medium having stored thereon a computer program for implementing the dental image restoration method based on a standard prior according to any one of claims 1-7.
CN202310117668.3A 2023-02-15 2023-02-15 Tooth image restoration method and system based on standard prior Pending CN116362995A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310117668.3A CN116362995A (en) 2023-02-15 2023-02-15 Tooth image restoration method and system based on standard prior

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310117668.3A CN116362995A (en) 2023-02-15 2023-02-15 Tooth image restoration method and system based on standard prior

Publications (1)

Publication Number Publication Date
CN116362995A 2023-06-30

Family

ID=86930925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310117668.3A Pending CN116362995A (en) 2023-02-15 2023-02-15 Tooth image restoration method and system based on standard prior

Country Status (1)

Country Link
CN (1) CN116362995A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876242A (en) * 2024-03-11 2024-04-12 深圳大学 Fundus image enhancement method, fundus image enhancement device, fundus image enhancement apparatus, and fundus image enhancement program
CN117876242B (en) * 2024-03-11 2024-05-28 深圳大学 Fundus image enhancement method, fundus image enhancement device, fundus image enhancement apparatus, and fundus image enhancement program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination