CN112116531A - Partial convolution based image occlusion recovery reconstruction method by utilizing shift depth characteristic rearrangement - Google Patents

Partial convolution based image occlusion recovery reconstruction method by utilizing shift depth characteristic rearrangement

Info

Publication number
CN112116531A
CN112116531A (application CN201910540242.2A)
Authority
CN
China
Prior art keywords
occlusion
image
mask
layer
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910540242.2A
Other languages
Chinese (zh)
Inventor
李月龙
高增斌
高云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Polytechnic University
Original Assignee
Tianjin Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Polytechnic University filed Critical Tianjin Polytechnic University
Priority to CN201910540242.2A priority Critical patent/CN112116531A/en
Publication of CN112116531A publication Critical patent/CN112116531A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image occlusion recovery method based on partial convolution with shift-based rearrangement of deep features. Compared with prior related methods, it offers strong adaptability and good robustness: at the use stage only the occluded image and the occluder's mask need to be input, no manual intervention is required, the training process is stable, and a balance is maintained between removing the occlusion to restore the original appearance and preserving realism. The main contribution addresses the problem of recovering image content lost to occlusion: the computer is given the capability of reconstructing the missing part of an image, achieving occlusion recovery that is accurate and detailed relative to human perceptual ability. In the generator, partial convolution updates the mask so that the visible region is kept intact, while rearranged deep-feature information reconstructs the occluded region; with the additional control of the discriminator and several partial loss terms constraining the generator, the reconstruction converges to the expected original appearance. The invention brings a better solution to practical problems such as occlusion compensation and damage recovery.

Description

Partial convolution based image occlusion recovery reconstruction method by utilizing shift depth characteristic rearrangement
Technical Field
The invention belongs to the fields of image modeling, computer vision and image generation, and relates to an image occlusion recovery and reconstruction method based on partial convolution with shift-based rearrangement of deep features, aimed mainly at reconstructing image information lost to occlusion.
Background
The development and application of deep learning have advanced dramatically on the back of modern computing power. Faced with an image that is incomplete because parts are missing due to occlusion, a human can, to a certain degree, compensate for and restore the missing parts and so recover the complete information in the image. Endowing a computer with this powerful capability through a well-designed intelligent algorithm brings a better solution to practical problems such as occlusion compensation and damage recovery.
Amid the rapid rise of deep learning, automatic image generation, learned from large amounts of data, has become one of the frontier research areas of computer vision. One inspiration in this direction comes from generative adversarial networks, which borrow the zero-sum-game idea from game theory and have achieved notable breakthroughs.
The adversarial idea also suits the task of occlusion removal and reconstruction. Since the goal is to remove an occlusion, the network cannot simply generate an arbitrary realistic image: the generated occluded part must be constrained so that the restored region is not filled at random. By accurately locating the occluded region, the visible region is kept intact, and the original appearance of the occluded region is restored using the updated mask and the rearranged deep-feature information.
Disclosure of Invention
The invention aims to solve the problem of compensating for image regions missing due to occlusion, to endow a computer with the ability to reconstruct the occluded missing part of an image, and to achieve occlusion compensation and reconstruction that is accurate and fine relative to human perceptual ability. To realize this, the invention designs a comprehensive strategy consisting of a training stage and an online use stage; the specific technical scheme is as follows.
The image occlusion recovery reconstruction method based on partial convolution and rearranged by using the shift depth features basically comprises the following steps:
the method comprises a training stage:
(1) establishing an occlusion-image training sample set constructed by combining real images of forty different objects with 1000 real background images, carrying out normalization, calibrating the masks of the forty different real object images, and taking the 1000 non-occluded images as real image labels;
(2) taking as input a synthesized occlusion image and the mask of the corresponding occluder; before each such input, the corresponding non-occluded image is fed once so that the shift layer can acquire real data; an image of size 256 × 256 is output after a series of convolution layers, deconvolution layers with skip-connected convolution features, batch normalization layers and activation layers;
(3) controlling the non-occluded part in the generator to keep its original output, so that only the occluded region of the image is generated, restored and reconstructed;
(4) for the shift layer, the convolution-layer features and the skip-connected deconvolution-layer features are obtained from the original image; the information of an unknown region in the deconvolution layer can be found by searching for nearest neighbours among the known region in the convolution layer, and the shift layer is optimized through the real image input each time:
U_y = X*(y) − y
where U_y denotes the displacement vector, X*(y) the feature information found in the occlusion-region decoder for position y, and y the real information of the occluded region.
(5) inputting the generated image obtained in step (2) and the non-occluded real image from step (1) into the discriminator of the model;
(6) optimizing the discriminator according to the difference between the low-dimensional features output after the discriminator's convolution layers, batch normalization layers and activation layers in step (5) and the low-dimensional features expected for a non-occluded output; the optimization objective function of the discriminator is:
L_D = E[log P(D = real | X_GT)] + E[log P(D = fake | G(X_occlusion, Mask))]
where E denotes expectation, D the discriminator and G the generator; X_GT is the real image, and D = real denotes that the discriminator output is expected to be a real image; X_occlusion is the input occluded image, Mask is the occluder's mask, G(X_occlusion, Mask) is the occlusion-recovery reconstructed image, and D = fake denotes that the discriminator output is expected to be a reconstructed image;
(7) optimizing the generator with the goal of fooling the discriminator into giving the expected output, i.e. jointly optimizing the generator with the gap terms described in steps (3), (4) and (6); the optimization objective function of the generator is:
L_G_L1 = ||G(X_occlusion, Mask) − X_GT||_1
L_G_hole = ||(1 − Mask) ⊙ (G(X_occlusion, Mask) − X_GT)||_1
L_G_valid = ||Mask ⊙ (G(X_occlusion, Mask) − X_GT)||_1
[Equation image in the original: definition of the shift-layer term L_G_shift]
L_G = E[log P(D = real | G(X_occlusion, Mask))] + L_G_L1 + L_G_hole + L_G_valid + L_G_shift
where E denotes expectation, D the discriminator and G the generator; X_GT is the real image, and D = real denotes that the discriminator output is expected to be a real image; X_occlusion is the input occluded image, Mask is the occluder's mask, and G(X_occlusion, Mask) is the occlusion-recovery reconstructed image; L_G_L1 acts on the image as a whole, L_G_hole on the occluded region, L_G_valid on the visible region, and L_G_shift on the shift layer.
Repeating steps (2) to (7) until the training model converges, i.e. until the loss value no longer decreases noticeably, and saving the parameters of the model generator for the online use stage;
Online use stage:
(8) loading parameters of a generator of the model;
(9) inputting the picture containing the occluder and the occluder's mask into the model generator to obtain and save the expected image with the occlusion removed and reconstructed.
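As an illustration only, the masked L1 gap terms used in step (7) can be sketched in NumPy. This is a hypothetical sketch, not the patent's implementation; it assumes a binary mask with 1 marking visible pixels and 0 marking the occluded hole:

```python
import numpy as np

def generator_losses(output, target, mask):
    """Masked L1 loss terms sketched from the training stage:
    L_hole penalises error in the occluded region (mask == 0),
    L_valid error in the visible region (mask == 1), and
    L_L1 error over the image as a whole."""
    l_l1 = np.abs(output - target).sum()
    l_hole = np.abs((1 - mask) * (output - target)).sum()
    l_valid = np.abs(mask * (output - target)).sum()
    return l_l1, l_hole, l_valid

target = np.ones((4, 4))
mask = np.ones((4, 4))
mask[1:3, 1:3] = 0.0                 # a 2x2 occluded hole
output = target.copy()
output[1:3, 1:3] = 0.5               # reconstruction error only inside the hole
l_l1, l_hole, l_valid = generator_losses(output, target, mask)
print(l_l1, l_hole, l_valid)  # 2.0 2.0 0.0
```

With a perfect visible region, L_valid is zero and the whole-image term equals the hole term, which matches the intent of step (3): only the occluded region should change.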
The image occlusion recovery and reconstruction method of the invention, based on partial convolution with shift-based rearrangement of deep features, has the following main advantages over the prior art:
(1) The invention studies image occlusion as its central problem and achieves more accurate and finer compensation and reconstruction of missing image content relative to human perceptual ability.
(2) Guided by the adversarial idea, the method accurately locates the occluded region, updates the mask via the shift to control the restored region, precisely separates the visible region from the reconstructed region, and thereby enhances the performance of the reconstruction method.
(3) Owing to the strategy of rearranging deep-feature information, the method actively fills the region to be reconstructed with information from around the occluded region during reconstruction, i.e. the information used to fill the occluded region is searched for within the visible region of the image; this reduces the time consumed in the training stage and keeps the reconstruction stably under control.
(4) In the adopted strategy of matching partial convolution with rearranged deep features, partial convolution updates the mask to control the reconstructed region while the rearranged deep features control the restoration; their combination effectively enhances the occlusion-recovery effect and ensures both the quality and the efficiency of the recovery, achieving a good balance between the two.
(5) The method is highly automated: the actual use stage requires essentially no human intervention, the occluded position can be recovered well once the occlusion is accurately located, and the method is robust.
(6) The method is not tied to image data of any given category; the overall strategy is general and only requires training on a large amount of data of the corresponding category. The trained model can recover the occluded region, so the scope of application, and hence the application prospect, is broad.
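The mask-update behaviour described in advantage (4), where occluded pixels in the mask become fewer as the network deepens, can be illustrated with a toy sketch. This is hypothetical NumPy code (the patent gives no code); it assumes a binary mask with 1 marking valid pixels:

```python
import numpy as np

def partial_conv_mask_update(mask, kernel=3):
    """One partial-convolution mask-update step on a binary mask
    (1 = valid pixel). An output pixel becomes valid if its sliding
    receptive field contains at least one valid input pixel, so the
    hole shrinks layer by layer."""
    h, w = mask.shape
    pad = kernel // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + kernel, j:j + kernel]
            out[i, j] = 1.0 if window.sum() > 0 else 0.0
    return out

# A 6x6 mask with a 2x2 hole in the centre.
mask = np.ones((6, 6))
mask[2:4, 2:4] = 0.0
updated = partial_conv_mask_update(mask)
print(int(mask.sum()), int(updated.sum()))  # the hole closes: 32 -> 36
```

Stacking such updates across layers reproduces the effect stated in claim 3: the occluded pixels in the mask become fewer and fewer while the effective area grows.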
Drawings
The sole drawing is the network structure framework diagram of the invention.
Detailed Description
The invention takes the discriminator and the generator as its main bodies, with partial convolution matched to rearranged deep features, thereby achieving a good image occlusion recovery effect. The practice of the invention is described and illustrated in more detail below:
the method comprises a training stage:
the partial convolution of the invention utilizes the image shielding recovery model rearranged by the displacement depth characteristics to train in a certain number of training samples, so the implementation of the method of the invention firstly faces the problem of selecting the training data. In order to ensure accurate control of the shielding condition, the invention takes the synthesized shielding image as a training sample, and has higher requirements on the number of samples due to the adoption of a model generating method, in the specific operation process, 40 real objects are used as shielding objects, 1000 pictures are used as backgrounds, and 40000 pictures are synthesized together to be used as training samples. In order to be able to accurately locate the occlusion region, a mask of 40 objects is also used as input.
The method comprises the steps of preprocessing and uniformly standardizing synthesized image data of a training sample, taking an image with a shelter and a mask corresponding to the shelter as input, inputting an image without the shelter corresponding to the image each time in order to enable a shifting layer to obtain real data, but not performing back propagation. The output of the generator of the model is an image which is restored and reconstructed by occlusion, the image and a real non-occlusion image are put together to be used as the input of a model discriminator, and the effect of restoring and reconstructing the image is judged in the discriminator.
In the aspects of optimization strategies of the model and learning strategies of the parameters, considering that the invention adopts a deep network structure, which comprises a plurality of neurons and a large number of learning parameters, a gradient descent method is adopted for learning and adjusting the weight. In general, the optimization objective of the discriminator is to distinguish whether the image occlusion recovery degree is true or false, the optimization objective of the generator is to control the perfect recovery of the image occlusion region, and the discriminator is deceived by recovering the image occlusion region according to the rearrangement depth information on the premise of keeping the visible region unchanged. The whole model is continuously trained by the process until the model converges, and the structural similarity of the PSNR (Peak Signal-to-Noise Ratio) and the SSIM (structural similarity) is calculated to obtain a high-quality result.
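For reference, the two convergence metrics mentioned above can be computed as follows. This is a sketch: `psnr` is the standard definition, while `ssim_global` is a single-window simplification of the usual 11×11 sliding-window SSIM, sufficient for tracking convergence:

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Single-window (global) SSIM -- a simplification of the standard
    sliding-window formulation."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.tile(np.arange(256, dtype=np.float64), (16, 1))  # 16x256 ramp image
assert psnr(img, img) == float("inf")   # identical images
noisy = img + 1.0                        # MSE = 1 -> PSNR = 20*log10(255)
print(round(psnr(img, noisy), 2))        # 48.13
print(round(ssim_global(img, img), 4))   # 1.0
```

Higher PSNR and an SSIM approaching 1 indicate that the reconstructed image is converging toward the non-occluded ground truth.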
Finally, the learned weight values are stored, so that the method is convenient for later extension type training or the use in the next online use stage.
Online use stage:
After the method has been learned and constructed through the offline process, it can be used online fully automatically: a complete occlusion-removal effect is obtained simply by inputting the occluded image and the occluder's mask, without any manual intervention.
For any image from which an occluder is to be removed, the method of the invention applies the following processing and analysis steps in sequence:
First, the occluder's mask is determined and taken as input together with the occluded image, and a model matching the generator used during training is built. Then the parameter file saved in the training stage is loaded into the defined model. Finally, the input data are passed through the model for a series of computations, yielding the image with the occlusion removed.
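The shift operation U_y = X*(y) − y described above can be illustrated by the following toy sketch. This is hypothetical NumPy code; a real implementation matches patches of deep features between the encoder and decoder, not single pixels on a tiny grid:

```python
import numpy as np

def shift_layer(features, mask):
    """Toy shift operation: for each occluded location y (mask == 0),
    find the most similar visible feature X*(y) among known locations
    (mask == 1) by L2 distance, record the displacement vector
    U_y = position(X*(y)) - y, and copy the matched feature in --
    i.e. fill the occluded region with rearranged visible features."""
    h, w, _ = features.shape
    known = [(i, j) for i in range(h) for j in range(w) if mask[i, j] == 1]
    shifts, filled = {}, features.copy()
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 0:  # occluded position y
                dists = [np.sum((features[a, b] - features[i, j]) ** 2)
                         for a, b in known]
                a, b = known[int(np.argmin(dists))]
                shifts[(i, j)] = (a - i, b - j)  # displacement U_y
                filled[i, j] = features[a, b]    # rearranged feature
    return filled, shifts

# 3x3 grid of 2-d features; the decoder's rough estimate at the occluded
# cell (1, 1) resembles the visible feature at (0, 0).
features = np.zeros((3, 3, 2))
features[0, 0] = [1.0, 1.0]
features[1, 1] = [1.0, 1.0]   # decoder estimate inside the hole
mask = np.ones((3, 3))
mask[1, 1] = 0.0
filled, shifts = shift_layer(features, mask)
print(shifts[(1, 1)])  # (-1, -1): the best match sits one step up-left
```

The recorded displacements are the U_y vectors; copying the matched features realizes the "searching the visible region to fill the occluded region" strategy of advantage (3).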

Claims (11)

1. The image occlusion recovery reconstruction method based on partial convolution and rearranged by using the shift depth features comprises the following steps:
the method comprises a training stage:
(1) building an occlusion-image training sample set by synthesizing real images of forty different objects with 1000 real background images, carrying out normalization, calibrating the masks of the forty different real object images, and taking the 1000 non-occluded images as real image labels;
(2) inputting a synthesized occlusion image and the mask of the corresponding occluder, and outputting an image of size 256 × 256 after a series of convolution layers, deconvolution layers with skip-connected convolution features, batch normalization layers and activation layers (before each input, the corresponding image without the occlusion added is fed once so that the shift layer can acquire real data);
(3) controlling the non-occluded part in the generator to keep its original output, so that only the occluded region of the image is restored and reconstructed;
(4) for the shift layer, the convolution-layer features and the skip-connected deconvolution-layer features are obtained from the original image; the information of an unknown region in the deconvolution layer can be found by searching for nearest neighbours among the known region in the convolution layer, and the shift layer is optimized through the real image input each time;
(5) Inputting the generated image obtained in the step (2) and the real image without occlusion in the step (1) into a discriminator of the model;
(6) optimizing the discriminator using the difference between the low-dimensional features output after the discriminator's convolution layers, batch normalization layers and activation layers in step (5) and the low-dimensional features expected for a non-occluded output;
(7) optimizing the generator with the goal of fooling the discriminator and thereby obtaining the desired output, i.e. jointly optimizing the generator with the gap terms described in steps (3), (4) and (6);
(8) repeating the steps (2) to (7) until the training model is converged, namely the loss value reaches a state of not obviously descending any more, and storing the parameters of the model generator for use in an online use stage;
Online use stage:
(9) loading parameters of a generator of the model;
(10) inputting the picture containing the occluder and the occluder's mask into the model generator to obtain and save the expected image with the occlusion removed and reconstructed.
2. The image occlusion recovery reconstruction method based on partial convolution with shift-rearranged depth features according to claim 1, wherein in step (1) the training images are of size 256 × 256, all synthesized occluders are combined with the background and clearly recognizable, and the training sample set contains 40000 synthesized images. There are 40 occluders, mainly people, plants and animals.
3. The image occlusion recovery reconstruction method based on partial convolution with shift-rearranged depth features according to claim 1, wherein in step (2) the corresponding non-occluded image is input once before each synthesized occluded image, so that the shift layer obtains real data to control the distribution of its rearranged features; back-propagation is disabled when the non-occluded image is input, so the generator is not affected. When a synthesized picture with an occluder is input, the occluder's mask is input at the same time; the picture and the mask pass together through a series of convolution layers, deconvolution layers with skip-connected convolution features, batch normalization layers and activation layers, with the previously updated mask used at each layer. As the number of network layers increases and the mask is updated, the occluded (255-valued) pixels in the output mask become fewer and fewer, the effective area of the output image grows larger, and the influence of the mask on the overall loss becomes smaller, so that the desired 256 × 256 image is output.
4. The image occlusion recovery reconstruction method based on partial convolution with shift-rearranged depth features according to claim 1, wherein in step (3) the synthesized occlusion image and the mask enter the generator together and jointly control the recovery of the occluded region, keeping the whole from deviating too much and, in particular, keeping the non-occluded (visible) region unchanged, so that only the occlusion is removed and the image reconstructed. The objective functions of this control are as follows:
L_G_L1 = ||G(X_occlusion, Mask) − X_GT||_1
L_G_hole = ||(1 − Mask) ⊙ (G(X_occlusion, Mask) − X_GT)||_1
L_G_valid = ||Mask ⊙ (G(X_occlusion, Mask) − X_GT)||_1
[Equation image in the original: definition of the shift-layer term L_G_shift]
where G denotes the generator; X_GT is the real image, X_occlusion is the input occluded image, Mask is the occluder's mask, and G(X_occlusion, Mask) is the occlusion-recovery reconstructed image. L_G_L1 acts on the image as a whole, L_G_hole on the occluded region, L_G_valid on the visible region, and L_G_shift on the shift layer.
5. The image occlusion recovery reconstruction method based on partial convolution with shift-rearranged depth features according to claim 1, wherein in step (4) a shift layer is additionally added at the third deconvolution layer with skip-connected convolution features, for depth-feature rearrangement:
U_y = X*(y) − y
where U_y denotes the displacement vector, X*(y) the feature information found in the occlusion-region decoder for position y, and y the real information of the occluded region.
6. The method according to claim 1, wherein in step (5) the 256 × 256 occlusion-recovery reconstructed image obtained from the generator and the 256 × 256 image without the occlusion added are fed into the discriminator.
7. The image occlusion recovery reconstruction method based on partial convolution with shift-rearranged depth features according to claim 1, wherein in step (6) the optimization objective function of the discriminator is:
L_D = E[log P(D = real | X_GT)] + E[log P(D = fake | G(X_occlusion, Mask))]
where E denotes expectation, D the discriminator and G the generator; X_GT is the real image, and D = real denotes that the discriminator output is expected to be a real image; X_occlusion is the input occluded image, Mask is the occluder's mask, G(X_occlusion, Mask) is the occlusion-recovery reconstructed image, and D = fake denotes that the discriminator output is expected to be a reconstructed image. The discriminator's layer configuration is:
Layer type          | Kernels | Kernel size | Stride | Batch normalization | Activation
Convolutional layer | 64      | 4 × 4       | 2      | Yes                 | LReLU
Convolutional layer | 128     | 4 × 4       | 2      | Yes                 | LReLU
Convolutional layer | 256     | 4 × 4       | 2      | Yes                 | LReLU
Convolutional layer | 512     | 4 × 4       | 1      | Yes                 | LReLU
Convolutional layer | 1       | 4 × 4       | 1      | No                  | None
8. The image occlusion recovery reconstruction method based on partial convolution with shift-rearranged depth features according to claim 1, wherein in step (7) the optimization objective function of the generator is:
L_G = E[log P(D = real | G(X_occlusion, Mask))] + L_G_L1 + L_G_hole + L_G_valid + L_G_shift
where E denotes expectation, D the discriminator and G the generator; X_GT is the real image, and D = real denotes that the discriminator output is expected to be a real image; X_occlusion is the input occluded image, Mask is the occluder's mask, and G(X_occlusion, Mask) is the occlusion-recovery reconstructed image; L_G_L1 acts on the image as a whole, L_G_hole on the occluded region, L_G_valid on the visible region, and L_G_shift on the shift layer.
[Figure images in the original: generator network layer configuration]
9. The image occlusion recovery reconstruction method based on partial convolution with shift-rearranged depth features according to claim 1, wherein in step (8), when the loss value no longer decreases significantly, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are calculated to confirm that good results are obtained and hence to decide whether to stop iterating; the parameters of the model's generator network layers are saved to a file on the computer hard disk.
10. The image occlusion recovery reconstruction method based on partial convolution with shift-rearranged depth features according to claim 1, wherein in step (9) a network layer structure model of the generator is defined, and the network layer parameters of the generator are loaded from the computer hard disk.
11. The method according to claim 1, wherein in step (10) the image requiring occlusion-recovery reconstruction and the mask of the corresponding occluder are input into the generator model loaded in step (9), and the occlusion-recovery reconstructed image is output.
CN201910540242.2A 2019-06-21 2019-06-21 Partial convolution based image occlusion recovery reconstruction method by utilizing shift depth characteristic rearrangement Pending CN112116531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910540242.2A CN112116531A (en) 2019-06-21 2019-06-21 Partial convolution based image occlusion recovery reconstruction method by utilizing shift depth characteristic rearrangement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910540242.2A CN112116531A (en) 2019-06-21 2019-06-21 Partial convolution based image occlusion recovery reconstruction method by utilizing shift depth characteristic rearrangement

Publications (1)

Publication Number Publication Date
CN112116531A true CN112116531A (en) 2020-12-22

Family

ID=73796045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910540242.2A Pending CN112116531A (en) 2019-06-21 2019-06-21 Partial convolution based image occlusion recovery reconstruction method by utilizing shift depth characteristic rearrangement

Country Status (1)

Country Link
CN (1) CN112116531A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436245A (en) * 2021-08-26 2021-09-24 武汉市聚芯微电子有限责任公司 Image processing method, model training method, related device and electronic equipment
CN117928565A (en) * 2024-03-19 2024-04-26 中北大学 Polarization navigation orientation method under complex shielding environment
CN117928565B (en) * 2024-03-19 2024-05-31 中北大学 Polarization navigation orientation method under complex shielding environment

Similar Documents

Publication Publication Date Title
CN110827213B (en) Super-resolution image restoration method based on generation type countermeasure network
CN111127308B (en) Mirror image feature rearrangement restoration method for single sample face recognition under partial shielding
CN111047541B (en) Image restoration method based on wavelet transformation attention model
CN110097609B (en) Sample domain-based refined embroidery texture migration method
CN102156875B (en) Image super-resolution reconstruction method based on multitask KSVD (K singular value decomposition) dictionary learning
Wen et al. Video super-resolution via a spatio-temporal alignment network
CN108985457B (en) Deep neural network structure design method inspired by optimization algorithm
CN111460914A (en) Pedestrian re-identification method based on global and local fine-grained features
CN112507617B (en) Training method of SRFlow super-resolution model and face recognition method
CN111915513B (en) Image denoising method based on improved adaptive neural network
CN110516724B (en) High-performance multi-layer dictionary learning characteristic image processing method for visual battle scene
CN116958453B (en) Three-dimensional model reconstruction method, device and medium based on nerve radiation field
CN111986075A (en) Style migration method for target edge clarification
CN110706303A (en) Face image generation method based on GANs
CN112116531A (en) Partial convolution based image occlusion recovery reconstruction method by utilizing shift depth characteristic rearrangement
CN114066747A (en) Low-illumination image enhancement method based on illumination and reflection complementarity
CN111696033A (en) Real image super-resolution model and method for learning cascaded hourglass network structure based on angular point guide
CN111626944B (en) Video deblurring method based on space-time pyramid network and against natural priori
Chen et al. Learning a multi-scale deep residual network of dilated-convolution for image denoising
Kim et al. Restoring spatially-heterogeneous distortions using mixture of experts network
CN109493279B (en) Large-scale unmanned aerial vehicle image parallel splicing method
CN116524290A (en) Image synthesis method based on countermeasure generation network
CN111064905A (en) Video scene conversion method for automatic driving
CN114943655A (en) Image restoration system for generating confrontation network structure based on cyclic depth convolution
Wu et al. Semantic image inpainting based on generative adversarial networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination