CN114943655A - Image restoration system based on a cyclic deep convolutional generative adversarial network structure - Google Patents
Image restoration system based on a cyclic deep convolutional generative adversarial network structure
- Publication number
- CN114943655A (application number CN202210553812.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- generator
- network
- similarity
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention provides an image restoration system based on a cyclic deep-convolution generative adversarial network structure, comprising the following modules: a first generator, used for repairing an input image sample to generate a repaired image; a first discriminator, used for comparing the repaired image with a preset target image to determine a first similarity between them, and further judging whether the first similarity reaches a preset first similarity threshold; a second generator, connected to the output of the first generator and used for restoring the repaired image to generate a restored image; and a second discriminator, used for comparing the restored image with the image sample to determine a second similarity between them, and further judging whether the second similarity reaches a preset second similarity threshold. The system combines the advantages of generative adversarial networks in image reconstruction and restoration with the advantages of convolutional neural networks in image feature extraction, achieving better results in image restoration.
Description
Technical Field
The invention relates to the field of image processing, and in particular to an image restoration system based on a cyclic deep-convolution generative adversarial network structure.
Background
With the continuous innovation and development of deep-learning technology, deep learning has been widely applied in the field of artificial intelligence, giving rise to a large number of new network structures and algorithms, of which the generative adversarial network is one.
A generative adversarial network (GAN) can play a great role in image reconstruction and restoration. In image research, in order to improve the accuracy of results, image samples are generally preprocessed appropriately before an experiment, in combination with the research goal, for example by image deblurring, repairing locally occluded regions, or removing fog and rain from images.
Disclosure of Invention
Aiming at the defects in the prior art, and drawing on the advantages of generative adversarial networks in image reconstruction and repair, the invention aims to provide an image repair method based on a CDCGAN network structure.
The image restoration system based on the cyclic deep-convolution generative adversarial network structure provided by the invention comprises the following modules:
the first generator is used for repairing the input image sample to generate a repaired image;
the first discriminator is used for comparing the repaired image with a preset target image to determine the first similarity between the repaired image and the preset target image, and further judging whether the first similarity reaches a preset first similarity threshold value;
the second generator is connected with the output end of the first generator and used for restoring the repaired image to generate a restored image;
and the second discriminator is used for comparing the restored image with the image sample to determine a second similarity between the restored image and the image sample, and further judging whether the second similarity reaches a preset second similarity threshold value.
Preferably, when the first discriminator determines that the first similarity reaches the preset first similarity threshold, repair of the image sample is completed;
and when the first discriminator judges that the first similarity does not reach the preset first similarity threshold, inputting the image sample into the first generator again.
Preferably, when the second discriminator determines that the second similarity reaches a preset second similarity threshold, the network parameter of the second generator is saved;
and when the second discriminator judges that the second similarity does not reach a preset second similarity threshold, inputting the image sample into the first generator again.
Preferably, the first generator, the first discriminator, the second generator, and the second discriminator form a cyclic generation network structure;
the cyclic generation network structure is used for continuously optimizing and updating the network parameters, so that the repaired image retains more of the image feature information of the image sample.
Preferably, the first generator and the second generator perform feature extraction on the input image by using a resnet50 network as a backbone network.
Preferably, the first and second discriminators employ a fully convolutional neural network.
Preferably, the first generator and the second generator each use a convolution network together with a deconvolution network;
the convolution network is used for carrying out feature extraction on the image sample; and the deconvolution network is used for reconstructing and repairing the image according to the characteristic vectors extracted by the convolution network.
Preferably, the first generator and the second generator are provided with skip connection structures, which splice, along the channel dimension, the feature maps obtained by the generator's down-sampling and the corresponding up-sampling, thereby realizing effective fusion of the multi-level features of the image.
Preferably, the network structures of the generator and the discriminator are designed by:
s1: replacing the pooling layers in the convolutional neural network with convolutional layers: the discriminator uses strided convolution in place of pooling, and the generator uses fractionally-strided convolution in place of pooling, the strided convolution performing spatial down-sampling in the discriminator;
s2: removing the full connection layer;
s3: setting batch normalization;
s4: setting the activation functions, wherein the generator and the discriminator use different activation functions: the ReLU function is set in the generator, the tanh function in the generator's output layer, and the LeakyReLU function in all layers of the discriminator.
Preferably, the loss function of the generator and the discriminator is:
wherein z represents the image to be processed, x represents the target image, G1(zi) represents the repaired image generated by the first generator, G2(yi) represents the restored image generated by the second generator, m is the number of images, and i is the image index.
Compared with the prior art, the invention has the following beneficial effects:
the method combines the strong advantages of the generation of the countermeasure network in the aspect of image reconstruction and restoration and the advantages of the convolutional neural network in the aspect of image feature extraction, so that the method can obtain better effect in the aspect of image restoration, and modifies the network model in order to enable the convolutional neural network to be better suitable for generating the countermeasure network, so that the model is more stable and is easier to learn.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort. Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a model framework diagram of the image restoration system based on a cyclic deep-convolution generative adversarial network structure according to an embodiment of the present invention;
FIG. 2 is a flowchart of the operation of the image restoration system based on a cyclic deep-convolution generative adversarial network structure according to an embodiment of the present invention;
FIG. 3 is a network structure diagram of the generator in an embodiment of the invention;
FIG. 4 is a network structure diagram of the discriminator in an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solutions of the present invention, and how they solve the above technical problems, are described in detail below with specific embodiments. These embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some of them. Embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a model framework diagram of an image restoration system based on a cyclic deep-convolution generative adversarial network structure according to an embodiment of the present invention. As shown in fig. 1, the present invention provides an image restoration system based on a cyclic deep-convolution generative adversarial network structure, comprising the following modules:
the first generator is used for repairing the input image sample to generate a repaired image;
the first discriminator is used for comparing the repaired image with a preset target image to determine the first similarity between the repaired image and the preset target image, and further judging whether the first similarity reaches a preset first similarity threshold;
the second generator is connected with the output end of the first generator and used for restoring the repaired image to generate a restored image;
and the second discriminator is used for comparing the restored image with the image sample to determine a second similarity between the restored image and the image sample, and further judging whether the second similarity reaches a preset second similarity threshold value.
In the embodiment of the present invention, the first generator is a Covnet1 network, the second generator is a Covnet2 network, the first discriminator is a Net1 network, and the second discriminator is a Net2 network.
The first similarity threshold and the second similarity threshold are set between 85% and 99%.
FIG. 2 is a flowchart of the operation of the image restoration system based on a cyclic deep-convolution generative adversarial network structure according to an embodiment of the present invention. As shown in fig. 2, when the first discriminator determines that the first similarity reaches the preset first similarity threshold, repair of the image sample is completed;
and when the first discriminator judges that the first similarity does not reach the preset first similarity threshold, inputting the image sample into the first generator again.
When the second discriminator judges that the second similarity reaches a preset second similarity threshold, saving the network parameters of the second generator;
and when the second discriminator judges that the second similarity does not reach a preset second similarity threshold, inputting the image sample into the first generator again.
The first generator, the first discriminator, the second generator and the second discriminator form a cyclic generation network structure;
the cyclic generation network structure is used for continuously optimizing and updating the network parameters under the loss function, so that the repaired image retains more of the image feature information of the image sample.
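The threshold-gated cyclic flow described above can be sketched as a toy loop. The function and the simulated similarity increments below are hypothetical stand-ins: a real implementation would train the Covnet1/Covnet2 generators and score with the Net1/Net2 discriminators, whereas here the similarities simply improve by 0.1 per round to illustrate the control flow.

```python
def train_cycle(sample, threshold1=0.95, threshold2=0.95, max_rounds=100):
    """Illustrative cyclic flow: the first generator repairs the sample and
    the first discriminator scores it against the target; the second
    generator restores the repaired image and the second discriminator
    scores it against the original sample. Similarities are simulated
    (stand-ins for real training), improving by 0.1 per round."""
    sim1 = sim2 = 0.0
    for rounds in range(1, max_rounds + 1):
        repaired = ("repaired", sample)       # stand-in for G1's output
        sim1 += 0.1                           # stand-in for D1's similarity score
        restored = ("restored", repaired)     # stand-in for G2's output
        sim2 += 0.1                           # stand-in for D2's similarity score
        if sim1 >= threshold1 and sim2 >= threshold2:
            return repaired, rounds           # both thresholds reached
        # otherwise the sample is fed to the first generator again
    return None, max_rounds

result, rounds = train_cycle("image_sample")
```

The 0.95 thresholds match the 85%–99% range mentioned in this embodiment; with the simulated 0.1 increments the loop terminates after ten rounds.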
Fig. 3 is a network structure diagram of the generator in an embodiment of the present invention. As shown in fig. 3, the first generator and the second generator use a resnet50 network as the backbone network to perform feature extraction on the input image, and each uses a convolution network together with a deconvolution network;
the generator is designed in an encoder-decoder architecture.
The convolution network is used for carrying out feature extraction on the image sample; and the deconvolution network is used for reconstructing and repairing the image according to the characteristic vectors extracted by the convolution network.
The first generator and the second generator are provided with skip connection structures, which splice, along the channel dimension, the feature maps obtained by the generator's down-sampling and the corresponding up-sampling, thereby realizing effective fusion of the multi-level features of the image. The first discriminator and the second discriminator use a fully convolutional neural network, in which the pooling layers and fully connected layers are replaced with convolutional layers.
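A minimal sketch of the channel-splicing skip connection, modelling a feature map as a plain Python list of 2-D channels; the 64-channel, 8×8 sizes are illustrative assumptions, not taken from the patent:

```python
def concat_channels(down_feat, up_feat):
    """Skip connection: stack the channels of a down-sampling feature map
    onto the up-sampled map of the same spatial size, so the decoder sees
    both low-level detail and high-level semantics."""
    h, w = len(up_feat[0]), len(up_feat[0][0])
    assert all(len(c) == h and len(c[0]) == w for c in down_feat), \
        "skip connection requires matching spatial dimensions"
    return down_feat + up_feat  # resulting channel count: C_down + C_up

# A 64-channel 8x8 encoder map fused with a 64-channel decoder map:
enc = [[[0.0] * 8 for _ in range(8)] for _ in range(64)]
dec = [[[1.0] * 8 for _ in range(8)] for _ in range(64)]
fused = concat_channels(enc, dec)  # 128 channels, each still 8x8
```

In a deep-learning framework the same operation is a single concatenation along the channel axis of two equally sized tensors.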
Fig. 4 is a network structure diagram of the discriminator according to the embodiment of the present invention. As shown in fig. 4, in order to make the GAN well suited to the convolutional neural network, the network structures of the generator and the discriminator are designed as follows:
s1: replacing the pooling layers in the convolutional neural network with convolutional layers: the discriminator uses strided convolution in place of pooling, and the generator uses fractionally-strided convolution in place of pooling, the strided convolution performing spatial down-sampling in the discriminator;
s2: removing the fully connected layer; in a conventional CNN a fully connected layer is added after the convolutional layers to output the final vector, but its parameters are so numerous that the network easily overfits;
s3: setting batch normalization; because a deep neural network has many layers and each layer changes the distribution of its output data, the overall deviation of the network grows as the depth increases. Batch normalization is intended to solve this problem: by normalizing the input of each layer, the data is effectively kept close to a fixed distribution;
s4: setting the activation functions; the generator and the discriminator use different activation functions: the ReLU function is used in the generator and the tanh function in the generator's output layer, because bounded activation functions were found to let the model learn faster and quickly cover the color space, while the LeakyReLU function is used in all layers of the discriminator.
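The down-sampling arithmetic behind step s1, together with the LeakyReLU of step s4, can be checked with a short sketch. The 4×4 kernel, stride 2 and padding 1 are typical DCGAN-style choices assumed here, not specified by the patent, as is the 0.2 LeakyReLU slope:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution layer."""
    return (size - kernel + 2 * pad) // stride + 1

def leaky_relu(v, slope=0.2):
    """s4: LeakyReLU as used in every discriminator layer (assumed slope)."""
    return v if v > 0 else slope * v

# s1: a stride-2 convolution halves the spatial size exactly as a
# 2x2 pooling layer would, but with learnable weights.
x = 256
for _ in range(4):                       # four down-sampling stages
    x = conv_out(x, kernel=4, stride=2, pad=1)
# 256 -> 128 -> 64 -> 32 -> 16
```

Each strided convolution thus takes over the role of a pooling layer while remaining trainable, which is the point of step s1.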
In an embodiment of the present invention, the loss function of the generator and the discriminator is:
wherein z represents the image to be processed, x represents the target image, G1(zi) represents the repaired image generated by the first generator, G2(yi) represents the restored image generated by the second generator, m is the number of images, and i is the image index.
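The loss formula itself appears only as an image in the published document and is not reproduced here. Purely as an illustration, a cycle-consistent reconstruction loss matching the symbols defined above — with y_i denoting the repaired image G_1(z_i) — could take a form such as the following; this is an assumed sketch, not the patent's verbatim equation:

```latex
L(G_1, G_2) = \frac{1}{m}\sum_{i=1}^{m}
    \Big( \big\lVert G_1(z_i) - x_i \big\rVert_1
        + \big\lVert G_2(y_i) - z_i \big\rVert_1 \Big),
\qquad y_i = G_1(z_i)
```

Under this reading, the first term drives the repaired image toward the target image (judged by the first discriminator), and the second term drives the restored image back toward the original sample (judged by the second discriminator).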
Preferably, the algorithm adopts a cyclic generation network structure from the image sample to the repaired image and then to the restored image, and continuously optimizes and updates the network parameters under a reasonably set loss function, so that the repaired image retains more of the image feature information of the image sample.
In fig. 3 and 4, the inputs of the generators and discriminators are three-channel RGB images of size 256 × 256; the output of each discriminator is a probability value judging whether the generated picture is real or fake, the output of generator covnet1 is the repaired version of the input image, and the output of generator covnet2 is the restored image. The parameters after "conv" denote, in order, the number of channels, the stride, and the convolution kernel size. The dotted lines in fig. 3 indicate that the up-sampling results are spliced, along the channel dimension, with the down-sampling results of the corresponding residual network, realizing effective fusion of the multi-level features of the image.
The embodiment of the invention combines the strong advantages of generative adversarial networks in image reconstruction and restoration with the advantages of convolutional neural networks in image feature extraction, so that the proposed algorithm achieves better results in image restoration. Moreover, in order to make the convolutional neural network well suited to the generative adversarial setting, the network model is modified, making it more stable and easier to train.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.
Claims (10)
1. An image restoration system based on a cyclic deep-convolution generative adversarial network structure, characterized by comprising the following modules:
the first generator is used for repairing the input image sample to generate a repaired image;
the first discriminator is used for comparing the repaired image with a preset target image to determine the first similarity between the repaired image and the preset target image, and further judging whether the first similarity reaches a preset first similarity threshold value;
the second generator is connected with the output end of the first generator and used for restoring the repaired image to generate a restored image;
and the second discriminator is used for comparing the restored image with the image sample to determine a second similarity between the restored image and the image sample, and further judging whether the second similarity reaches a preset second similarity threshold value.
2. The image restoration system based on a cyclic deep-convolution generative adversarial network structure according to claim 1, wherein when the first discriminator judges that the first similarity reaches the preset first similarity threshold, repair of the image sample is completed;
and when the first discriminator judges that the first similarity does not reach the preset first similarity threshold, inputting the image sample into the first generator again.
3. The image restoration system based on a cyclic deep-convolution generative adversarial network structure according to claim 1, wherein when the second discriminator judges that the second similarity reaches the preset second similarity threshold, the network parameters of the second generator are saved;
and when the second discriminator judges that the second similarity does not reach the preset second similarity threshold, inputting the image sample into the first generator again.
4. The image restoration system based on a cyclic deep-convolution generative adversarial network structure according to claim 1, wherein the first generator, the first discriminator, the second generator and the second discriminator form a cyclic generation network structure;
the cyclic generation network structure is used for continuously optimizing and updating the network parameters, so that the repaired image retains more of the image feature information of the image sample.
5. The system of claim 1, wherein the first generator and the second generator perform feature extraction on the input image by using a resnet50 network as a backbone network.
6. The system of claim 1, wherein the first and second discriminators employ a fully convolutional neural network.
7. The image restoration system based on a cyclic deep-convolution generative adversarial network structure according to claim 1, wherein the first generator and the second generator each use a convolution network together with a deconvolution network;
the convolution network is used for carrying out feature extraction on the image sample; and the deconvolution network is used for reconstructing and repairing the image according to the characteristic vectors extracted by the convolution network.
8. The image restoration system based on a cyclic deep-convolution generative adversarial network structure according to claim 1, wherein the first generator and the second generator are provided with skip connection structures, which splice, along the channel dimension, the feature maps obtained by the generator's down-sampling and the corresponding up-sampling, realizing effective fusion of the multi-level features of the image.
9. The image restoration system based on a cyclic deep-convolution generative adversarial network structure according to claim 1, wherein the network structures of the generator and the discriminator are designed by:
s1: replacing the pooling layers in the convolutional neural network with convolutional layers: the discriminator uses strided convolution in place of pooling, and the generator uses fractionally-strided convolution in place of pooling, the strided convolution performing spatial down-sampling in the discriminator;
s2: removing the full connection layer;
s3: setting batch normalization;
s4: setting the activation functions, wherein the generator and the discriminator use different activation functions: the ReLU function is set in the generator, the tanh function in the generator's output layer, and the LeakyReLU function in all layers of the discriminator.
10. The system of claim 9, wherein the loss function of the generator and the discriminator is:
wherein z represents the image to be processed, x represents the target image, G1(zi) represents the repaired image generated by the first generator, G2(yi) represents the restored image generated by the second generator, m is the number of images, and i is the image index.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210553812.3A CN114943655A (en) | 2022-05-19 | 2022-05-19 | Image restoration system based on a cyclic deep convolutional generative adversarial network structure |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210553812.3A CN114943655A (en) | 2022-05-19 | 2022-05-19 | Image restoration system based on a cyclic deep convolutional generative adversarial network structure |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114943655A true CN114943655A (en) | 2022-08-26 |
Family
ID=82909581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210553812.3A Pending CN114943655A (en) | 2022-05-19 | 2022-05-19 | Image restoration system based on a cyclic deep convolutional generative adversarial network structure |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114943655A (en) |
- 2022-05-19: CN application CN202210553812.3A filed, published as CN114943655A (status: Pending)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116244458A (en) * | 2022-12-16 | 2023-06-09 | 北京理工大学 | Method for generating training, generating sample pair, searching model training and trademark searching |
CN116244458B (en) * | 2022-12-16 | 2023-08-25 | 北京理工大学 | Method for generating training, generating sample pair, searching model training and trademark searching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||