CN114612316A - Method and device for kernel prediction network image rain removal - Google Patents

Method and device for kernel prediction network image rain removal

Info

Publication number
CN114612316A
Authority
CN
China
Prior art keywords
image
layer
module
rain
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210109989.4A
Other languages
Chinese (zh)
Inventor
张文君
曹玥
杨建国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202210109989.4A priority Critical patent/CN114612316A/en
Publication of CN114612316A publication Critical patent/CN114612316A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30192 Weather; Meteorology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of image processing and discloses a method and a device for kernel prediction network image rain removal. Compared with other single-image rain removal methods based on kernel prediction networks, the method improves the multi-scale design of the kernel prediction network, achieves a fast running speed and better results with a simpler network structure, suits fast preprocessing of acquired data, and can be deployed directly on devices for real-time acquisition and rain removal in rainy conditions.

Description

Method and device for kernel prediction network image rain removal
Technical Field
The invention relates to the field of image processing, and in particular to a method and a device for kernel prediction network image rain removal.
Background
Data collected by outdoor optical cameras on rainy days often have low visibility: raindrops randomly occlude the original image content, and the quality of the raindrop preprocessing step affects the performance of downstream computer vision tasks. Because raindrop size and density are random, the algorithm needs good multi-scale capability to adapt to rain of different intensities. Removing raindrops and restoring the image must not destroy the original background details, so the algorithm also needs a strong ability to distinguish raindrops from the background. The rain removal algorithm therefore places high demands on the multi-scale capability and accuracy of the network.
Among single-image rain removal algorithms, patent CN113240612A (the EfficientDerain paper) applies a kernel prediction network to the rain removal task. Instead of synthesizing image pixels directly, the kernel prediction network predicts a denoising kernel and convolves it with the input image to obtain the corresponding target output image. That algorithm adds a dilated filtering design to the kernel prediction network (i.e., it generates filtering kernels of different sizes); its processing quality on the Rain100H and SPA data sets is essentially on par with RCDNet, and its main advantage is a large speed improvement. However, it directly reuses the structure of the original denoising kernel prediction network, outputs filtering kernels of different sizes only at the last layer, and makes no adjustment for the characteristics of the rain removal task. Moreover, dilated filtering by itself does not effectively extract multi-scale features, and its PSNR and SSIM scores on the Rain1400H and Raindrop data sets still lag behind RCDNet and DeRaindrop.
Disclosure of Invention
In order to solve the technical problems in the prior art, the invention provides a method and a device for kernel prediction network image rain removal; the specific technical scheme is as follows:
a nuclear prediction network image rain removing method comprises the following steps:
Step one, constructing a multi-scale attention kernel network by using an encoder-decoder network and fusing a multi-scale module with an attention mechanism, wherein the multi-scale attention kernel network comprises an encoding module, a multi-scale module, an attention module, a decoding module and a processing module;
the encoding module is used for encoding and compressing the feature image, reducing the size of the input image, and outputting the result to the multi-scale module;
the multi-scale module extracts multi-scale features of the image;
the attention module extracts key region features of the image containing the multi-scale features;
the decoding module is used for decoding and outputting the image and restoring the image to the size of the original image;
the processing module is used for filtering the decoded and output image;
Step two, collecting or downloading a source data set, inputting the data set into the multi-scale attention kernel network for offline training, and obtaining a rain removal training model;
Step three, deploying the rain removal training model on the device platform requiring preprocessing, inputting a single image to be derained into the rain removal network model to obtain the corresponding rain removal filtering kernels, and convolving them with the input image to obtain the derained image.
Furthermore, the first convolutional layer of the encoding module keeps the size of the input feature map unchanged and increases the number of channels to 64; the second to fourth convolutional layers first halve the feature map relative to the current layer's input size through a pooling layer, then keep the feature map size unchanged through convolution while doubling the number of channels; the fifth convolutional layer also halves the feature map but keeps the number of channels unchanged.
Furthermore, the multi-scale module is composed of a convolutional layer, an ASPP layer and a pooling layer; the output of the encoding module is fed into the ASPP layer and the pooling layer separately: the ASPP layer keeps the feature map size unchanged and quadruples the number of channels, while the pooling layer obtains a feature map of the same size as its input through global average pooling followed by upsampling; the ASPP result and the pooling result are fused by concatenation, and a 1 × 1 convolutional layer then reduces the channels of the fused result to match the output of the encoding module, completing multi-scale feature extraction.
Further, the ASPP layer consists of a 1 × 1 convolutional layer with stride 1 and no padding, and 3 × 3 dilated (atrous) convolutions with dilation rates of 6, 12 and 18, respectively, all with stride 1 and with padding of 6, 12 and 18, respectively.
Further, the attention module binarizes the output of the multi-scale module through a sigmoid function, performs a dot-product operation between the binarized output and the output of the encoding module, and fuses the dot-product result with the output of the encoding module.
Furthermore, the decoding module is composed of four convolutional layers; the output of the attention module is fed into the decoding module, each of the four convolutional layers performs an upsampling operation, and each output feature map has twice the resolution of its input. Specifically, the first and second convolutional layers halve the number of input channels, the third convolutional layer changes the number of output channels to K², and the fourth convolutional layer keeps the number of output channels at K² while restoring the feature map to the original image resolution.
Further, the processing module reshapes the K²-channel output obtained from the decoding module into K × K linear filters and convolves these filters with the image to obtain the final image, namely the derained image, where K is the size of the convolution kernel.
Further, the convolutional layer and the pooling layer in the encoding module and the decoding module are constructed as follows: the convolution layer is composed of three convolution functions with the size of 3 x 3, the step size of 1 and the padding of 1 and a ReLU activation function; the pooling layer consists of an average pooling of size 2 x 2 with step size 1.
Further, the offline training specifically comprises: the input image is processed by the K × K filtering kernels generated by the multi-scale attention kernel network to obtain a processed image Ô, which is compared with the ground-truth image O, and the network weights are continuously adjusted by minimizing the loss function until training is finished; the loss function Loss is formed by combining the Manhattan distance L1 between image gradients and the Euclidean distance L2 of image brightness, computed as follows:
L1 = ‖∇Ô − ∇O‖₁
L2 = ‖Ô − O‖₂
Loss = L1 + L2
a nuclear prediction network image rain removing device comprises one or more processors and is used for realizing the nuclear prediction network image rain removing method.
The invention has the beneficial effects that:
the invention improves the nuclear prediction network in the rain removing method, simplifies the network structure, does not adopt multi-layer jump connection, but uses a multi-scale cooperation attention mode, increases the distinguishing degree of the network on raindrops with different scales and backgrounds, and can effectively improve the speed.
Drawings
FIG. 1 is a schematic diagram of an image de-raining process of the present invention;
FIG. 2 is a schematic diagram of the structure of the encoding module of the present invention;
FIG. 3 is a schematic diagram of a multi-scale and attention module network architecture of the present invention;
FIG. 4 is a schematic diagram of a network architecture of the decode module and process module of the present invention;
FIG. 5 is a block diagram of the kernel prediction network image rain removal device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and technical effects of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples.
The invention discloses a method and a device for kernel prediction network image rain removal. An encoder-decoder network serves as the overall framework, and a multi-scale module and an attention mechanism are fused to generate rain removal kernels corresponding to the input image. Filtering the single rainy image with these kernels removes raindrops of different sizes and directions and quickly yields a high-quality derained image; the ASPP-based multi-scale module adapts to raindrop noise of different intensities, the attention mechanism makes the background and raindrops more distinguishable, and the result of the single-image rain removal algorithm is effectively improved. Compared with other single-image rain removal methods based on kernel prediction networks, the method improves the multi-scale design of the kernel prediction network, achieves a fast running speed and better results with a simpler network structure, suits fast preprocessing of acquired data, and can be deployed directly on devices for real-time acquisition and rain removal in rainy conditions.
Specifically, the method for removing rain from the nuclear prediction network image comprises the following steps:
step one, a multi-scale attention core network is constructed by using a codec network and fusing a multi-scale module and an attention mechanism.
Specifically, the multi-scale attention kernel network comprises an encoding module, a multi-scale module, an attention module, a decoding module and a processing module. The encoding module and the decoding module do not use the skip connections of the original kernel prediction network; instead, the multi-scale module and the attention module are used to extract multi-scale and key-region features.
The encoding module can be simplified into five convolutional layers, as shown in FIG. 2. The first convolutional layer keeps the feature map size unchanged and increases the number of channels to 64; the second to fourth convolutional layers first halve the feature map relative to the current layer's input size through a pooling layer, then keep the feature map size unchanged through convolution while doubling the number of channels; the fifth convolutional layer again halves the feature map but keeps the number of channels unchanged. Taking an input image resolution of 512 × 384 as an example, the feature map output by the encoding module has a resolution of 32 × 24 and 512 channels.
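For illustration, the following PyTorch sketch shows one possible implementation of the encoding module described above; the class and helper names are illustrative, not from the patent. The text specifies a 2 × 2 average pooling with step size 1, but halving the feature map requires stride 2, which the sketch assumes.

```python
# Sketch of the encoding module: each "convolution layer" is three 3x3
# convolutions (stride 1, padding 1) with ReLU, and a 2x2 average pooling
# halves the feature map before layers 2-5 (stride 2 assumed, see note above).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Three 3x3 convolutions, each followed by a ReLU activation."""
    layers = []
    for i in range(3):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, 1, 1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)


class Encoder(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)  # halves H and W
        self.layer1 = conv_block(in_ch, 64)   # keep size, channels -> 64
        self.layer2 = conv_block(64, 128)     # channels x2 after pooling
        self.layer3 = conv_block(128, 256)
        self.layer4 = conv_block(256, 512)
        self.layer5 = conv_block(512, 512)    # channels unchanged

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(self.pool(x))
        x = self.layer3(self.pool(x))
        x = self.layer4(self.pool(x))
        x = self.layer5(self.pool(x))         # e.g. 512x384 input -> 32x24, 512 channels
        return x
```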
As shown in FIG. 3, the multi-scale module is composed of a convolutional layer, an ASPP layer (atrous spatial pyramid pooling) and a pooling layer; the output of the encoding module is fed into the ASPP layer and the pooling layer separately: the ASPP layer keeps the feature map size unchanged and quadruples the number of channels, while the pooling layer obtains a feature map of the same size as its input through global average pooling followed by upsampling; the ASPP result and the pooling result are fused by concatenation, and a 1 × 1 convolutional layer then reduces the channels of the fused result to match the output of the encoding module, completing multi-scale feature extraction.
The ASPP layer is composed of a 1 × 1 convolutional layer with stride 1 and no padding, and 3 × 3 dilated (atrous) convolutions with dilation rates of 6, 12 and 18, respectively; all have stride 1, and the padding is 6, 12 and 18, respectively.
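A minimal sketch of the multi-scale module under the same assumptions as above: the ASPP branches follow the stated dilation rates and padding, the global-pooling branch is upsampled back to the input size, and a 1 × 1 convolution reduces the concatenated result to the encoder's channel count. Branch widths and the upsampling mode are assumptions.

```python
# Sketch of the multi-scale module: ASPP (1x1 conv + three 3x3 dilated convs
# with rates 6/12/18) quadruples the channels, a global-average-pooling branch
# is upsampled back to the input size, and a 1x1 conv fuses everything back
# to the encoder's channel count.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ASPP(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, kernel_size=1, stride=1)] +        # 1x1, no padding
            [nn.Conv2d(ch, ch, kernel_size=3, stride=1, padding=r, dilation=r)
             for r in (6, 12, 18)])                               # dilated 3x3 convs

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)    # channels x4


class MultiScaleModule(nn.Module):
    def __init__(self, ch=512):
        super().__init__()
        self.aspp = ASPP(ch)
        self.fuse = nn.Conv2d(4 * ch + ch, ch, kernel_size=1)     # back to encoder width

    def forward(self, x):
        h, w = x.shape[-2:]
        aspp_out = self.aspp(x)                                   # same size, 4x channels
        pooled = F.adaptive_avg_pool2d(x, 1)                      # global average pooling
        pooled = F.interpolate(pooled, size=(h, w), mode="nearest")  # back to input size
        return self.fuse(torch.cat([aspp_out, pooled], dim=1))    # splice + 1x1 conv
```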
The attention module binarizes the output of the multi-scale module through a sigmoid function and then performs a dot-product operation with the output of the encoding module, fusing the dot-product result with the output of the encoding module. After the attention module, the features fed into the decoding module already contain multi-scale, discriminative information, which helps the network extract task-relevant distinguishing features.
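The attention step can be sketched as follows; the patent does not spell out the final fusion of the gated features with the encoder output, so an additive fusion is assumed here.

```python
# Sketch of the attention step: sigmoid gating of the encoder features by the
# multi-scale output, then fusion with the encoder output (addition assumed).
import torch


def attention_fuse(multi_scale_out, encoder_out):
    mask = torch.sigmoid(multi_scale_out)    # soft "binarization" into [0, 1]
    gated = mask * encoder_out               # element-wise (dot) product
    return gated + encoder_out               # fuse with the encoder output
```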
As shown in FIG. 4, the decoding module is composed of four convolutional layers; the output of the attention module is fed into the decoding module, each of the four convolutional layers performs an upsampling operation, and each output feature map has twice the resolution of its input. Specifically, the first and second convolutional layers halve the number of input channels, the third convolutional layer changes the number of output channels to K², and the fourth convolutional layer keeps the number of output channels at K² while restoring the feature map to the original image resolution.
The convolution layer and the pooling layer in the encoding module and the decoding module are constructed as follows: the convolution layer is composed of three convolution functions with the size of 3 x 3, the step size of 1 and the padding of 1 and a ReLU activation function; the pooling layer consists of an average pooling of size 2 x 2 with step size 1.
The processing module reshapes the K²-channel output obtained from the decoding module into K × K linear filters and convolves these filters with the image to obtain the final image, namely the derained image, where K is the size of the convolution kernel.
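A sketch of the decoding and processing modules under the stated channel and resolution changes; applying each pixel's predicted K × K kernel via unfold is the usual kernel-prediction-network formulation and is assumed here (conv_block is the three-convolution helper from the encoder sketch, repeated for self-containment).

```python
# Sketch of the decoding module (four upsampling convolution layers) and the
# processing module, which applies each pixel's predicted K x K kernel to the
# input image.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):                # same helper as in the encoder sketch
    return nn.Sequential(*[m for i in range(3) for m in (
        nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, 1, 1),
        nn.ReLU(inplace=True))])


class Decoder(nn.Module):
    def __init__(self, ch=512, K=3):
        super().__init__()
        self.layers = nn.ModuleList([
            conv_block(ch, ch // 2),          # 512 -> 256
            conv_block(ch // 2, ch // 4),     # 256 -> 128
            conv_block(ch // 4, K * K),       # -> K^2 channels
            conv_block(K * K, K * K),         # keep K^2, restore resolution
        ])

    def forward(self, x):
        for layer in self.layers:
            x = F.interpolate(x, scale_factor=2, mode="bilinear",
                              align_corners=False)    # resolution x2 per layer
            x = layer(x)
        return x                                       # B x K^2 x H x W


def apply_kernels(image, kernels, K=3):
    """Filter every pixel of `image` with its own predicted K x K kernel."""
    b, c, h, w = image.shape
    patches = F.unfold(image, K, padding=K // 2)       # B x (C*K*K) x (H*W)
    patches = patches.view(b, c, K * K, h, w)
    weights = kernels.view(b, 1, K * K, h, w)          # broadcast over colour channels
    return (patches * weights).sum(dim=2)              # derained image, B x C x H x W
```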
Step two, a training data set is constructed from open-source data sets such as SPA, Rain1400H and Rain100H. This embodiment takes Rain1400H as an example: the data set contains 1000 original images, and 14 rainy images with different streak directions and sizes are generated from each original image, giving 14000 rainy images and 1000 original images in total. In this example, 900 images were randomly selected for the training set and the remaining images were used for the test set.
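A small illustrative helper for building such a split; only the 1000-scene, 14-renditions and 900-for-training numbers come from the text, while the directory layout and file naming are hypothetical.

```python
# Illustrative split of a Rain1400-style data set (file naming is hypothetical).
import random


def split_rain1400(num_scenes=1000, rainy_per_scene=14, num_train=900, seed=0):
    """Pair each rainy image with its clean original and split by scene."""
    scene_ids = list(range(1, num_scenes + 1))
    random.Random(seed).shuffle(scene_ids)

    def pairs(ids):
        return [(f"rain/{i}_{k}.jpg", f"norain/{i}.jpg")   # hypothetical naming
                for i in ids for k in range(1, rainy_per_scene + 1)]

    return pairs(scene_ids[:num_train]), pairs(scene_ids[num_train:])


train_pairs, test_pairs = split_rain1400()   # 12600 training pairs, 1400 test pairs
```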
As shown in FIG. 1, on a server platform supporting deep learning training, the data set is input into the multi-scale attention kernel network, and the network parameters are continuously adjusted through the loss function until offline training is completed, obtaining the rain removal network model.
The offline training specifically comprises: the input image is processed by the K × K filtering kernels generated by the multi-scale attention kernel network to obtain a processed image Ô, which is compared with the ground-truth image O, and the network weights are continuously adjusted by minimizing the loss function until training is finished.
The loss function Loss is formed by combining the Manhattan distance L1 between image gradients and the Euclidean distance L2 of image brightness, computed as follows:
L1 = ‖∇Ô − ∇O‖₁
L2 = ‖Ô − O‖₂
Loss = L1 + L2
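A sketch of this loss in PyTorch; the gradient operator (forward differences) and the equal weighting of the two terms are assumptions not fixed by the text.

```python
# Sketch of the loss: L1 on image gradients plus L2 on image brightness.
import torch
import torch.nn.functional as F


def image_gradients(img):
    """Forward differences along width and height."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy


def derain_loss(pred, target):
    l2 = F.mse_loss(pred, target)                         # Euclidean term on brightness
    dx_p, dy_p = image_gradients(pred)
    dx_t, dy_t = image_gradients(target)
    l1 = F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)    # Manhattan term on gradients
    return l1 + l2

# minimal use inside a training step (model, optimizer and data are assumed):
#   loss = derain_loss(derained, clean); loss.backward(); optimizer.step()
```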
and step three, deploying the model on an equipment platform needing preprocessing operation, inputting a single image needing rain removal into a rain removal network model to obtain a corresponding rain removal filtering kernel, and performing convolution with the input image to obtain a rain removal image.
Corresponding to the embodiment of the kernel prediction network image rain removal method, the invention also provides an embodiment of a kernel prediction network image rain removal device.
The kernel prediction network image rain removal device provided by the embodiment of the invention comprises one or more processors and is used for implementing the kernel prediction network image rain removal method in the above embodiments.
Specifically, referring to FIG. 5, the device is provided with a board that supports loading a deep learning model; the board includes a memory, a processor, and I/O interfaces such as an image acquisition module, and can acquire or import an image and output it after real-time processing on the board. The trained network is deployed on the hardware board to implement the kernel prediction network image rain removal method.
The embodiment of the kernel prediction network image rain removal device can be applied to any equipment with data processing capability, such as computers and similar devices. The device embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the device, as a logical device, is formed by the processor of the equipment reading the corresponding computer program instructions from non-volatile memory into memory and running them. From a hardware perspective, FIG. 5 shows a hardware structure diagram of the equipment with data processing capability in which the kernel prediction network image rain removal device of the present invention is located; besides the processor, memory, network interface and non-volatile memory shown in FIG. 5, the equipment may also include other hardware according to its actual function, which is not described again here.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
An embodiment of the present invention further provides a computer-readable storage medium on which a program is stored; when the program is executed by a processor, the kernel prediction network image rain removal method in the foregoing embodiments is implemented.
The computer-readable storage medium may be an internal storage unit of any device with data processing capability described in the foregoing embodiments, such as a hard disk or a memory. The computer-readable storage medium may also be an external storage device of that device, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card, or a Flash memory card (Flash Card) provided on the device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the device, and may also be used for temporarily storing data that has been output or is to be output.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way. Although the foregoing has described the practice of the present invention in detail, it will be apparent to those skilled in the art that modifications may be made to the practice of the invention as described in the foregoing examples, or that certain features may be substituted in the practice of the invention. All changes, equivalents and modifications which come within the spirit and scope of the invention are desired to be protected.

Claims (10)

1. A kernel prediction network image rain removal method, characterized by comprising the following steps:
Step one, constructing a multi-scale attention kernel network by using an encoder-decoder network and fusing a multi-scale module with an attention mechanism, wherein the multi-scale attention kernel network comprises an encoding module, a multi-scale module, an attention module, a decoding module and a processing module;
the encoding module is used for encoding and compressing the feature image, reducing the size of the input image, and outputting the result to the multi-scale module;
the multi-scale module extracts multi-scale features of the image;
the attention module extracts key region features of the image containing the multi-scale features;
the decoding module is used for decoding and outputting the image and restoring the image to the size of the original image;
the processing module is used for filtering the decoded and output image;
Step two, collecting or downloading a source data set, inputting the data set into the multi-scale attention kernel network for offline training, and obtaining a rain removal training model;
Step three, deploying the rain removal training model on the device platform requiring preprocessing, inputting a single image to be derained into the rain removal network model to obtain the corresponding rain removal filtering kernels, and convolving them with the input image to obtain the derained image.
2. The method according to claim 1, wherein the first convolutional layer of the encoding module keeps the size of the input feature map unchanged and increases the number of channels to 64; the second to fourth convolutional layers first halve the feature map relative to the current layer's input size through a pooling layer, then keep the feature map size unchanged through convolution while doubling the number of channels; the fifth convolutional layer also halves the feature map but keeps the number of channels unchanged.
3. The kernel prediction network image rain removal method as claimed in claim 1, wherein the multi-scale module is composed of a convolutional layer, an ASPP layer and a pooling layer; the output of the encoding module is fed into the ASPP layer and the pooling layer separately: the ASPP layer keeps the feature map size unchanged and quadruples the number of channels, while the pooling layer obtains a feature map of the same size as its input through global average pooling followed by upsampling; the ASPP result and the pooling result are fused by concatenation, and a 1 × 1 convolutional layer then reduces the channels of the fused result to match the output of the encoding module, completing multi-scale feature extraction.
4. The method as claimed in claim 3, wherein the ASPP layer consists of a 1 × 1 convolutional layer with stride 1 and no padding, and 3 × 3 dilated (atrous) convolutions with dilation rates of 6, 12 and 18, respectively, all with stride 1 and with padding of 6, 12 and 18, respectively.
5. The method as claimed in claim 1, wherein the attention module binarizes the output of the multi-scale module through a sigmoid function, performs a dot-product operation between the binarized output and the output of the encoding module, and fuses the dot-product result with the output of the encoding module.
6. The method of claim 1, wherein the decoding module is formed by four convolutional layers; the output of the attention module is fed into the decoding module, each of the four convolutional layers performs an upsampling operation, and each output feature map has twice the resolution of its input; specifically, the first and second convolutional layers halve the number of input channels, the third convolutional layer changes the number of output channels to K², and the fourth convolutional layer keeps the number of output channels at K² while restoring the feature map to the original image resolution.
7. The method as claimed in claim 6, wherein the processing module reshapes the K²-channel output obtained from the decoding module into K × K linear filters and convolves these filters with the image to obtain the final image, namely the derained image, where K is the size of the convolution kernel.
8. The method of claim 1, wherein the convolutional layer and the pooling layer in the encoding module and the decoding module are constructed as follows: the convolution layer is composed of three convolution functions with the size of 3 x 3, the step size of 1 and the padding of 1 and a ReLU activation function; the pooling layer consists of one average pooling of size 2 x 2 with step size 1.
9. The kernel prediction network image rain removal method according to claim 1, wherein the offline training specifically comprises: the input image is processed by the K × K filtering kernels generated by the multi-scale attention kernel network to obtain a processed image Ô, which is compared with the ground-truth image O, and the network weights are continuously adjusted by minimizing the loss function until training is completed; the loss function Loss is formed by combining the Manhattan distance L1 between image gradients and the Euclidean distance L2 of image brightness, computed as follows:
L1 = ‖∇Ô − ∇O‖₁
L2 = ‖Ô − O‖₂
Loss = L1 + L2
10. A kernel prediction network image rain removal device, comprising one or more processors configured to implement the kernel prediction network image rain removal method according to any one of claims 1 to 9.
CN202210109989.4A 2022-01-28 2022-01-28 Method and device for kernel prediction network image rain removal Pending CN114612316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210109989.4A CN114612316A (en) 2022-01-28 Method and device for kernel prediction network image rain removal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210109989.4A CN114612316A (en) 2022-01-28 Method and device for kernel prediction network image rain removal

Publications (1)

Publication Number Publication Date
CN114612316A true CN114612316A (en) 2022-06-10

Family

ID=81859415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210109989.4A Pending CN114612316A (en) 2022-01-28 2022-01-28 Method and device for removing rain from nuclear prediction network image

Country Status (1)

Country Link
CN (1) CN114612316A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115659176A (en) * 2022-10-14 2023-01-31 湖南大学 Training method of intelligent contract vulnerability detection model and related equipment

Similar Documents

Publication Publication Date Title
CN111311629A (en) Image processing method, image processing device and equipment
CN110599401A (en) Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
RU2706891C1 (en) Method of generating a common loss function for training a convolutional neural network for converting an image into an image with drawn parts and a system for converting an image into an image with drawn parts
CN111369565A (en) Digital pathological image segmentation and classification method based on graph convolution network
CN111091503A (en) Image out-of-focus blur removing method based on deep learning
CN117058007A (en) Object class repair in digital images using class-specific repair neural networks
CN115082966A (en) Pedestrian re-recognition model training method, pedestrian re-recognition method, device and equipment
CN114612316A (en) Method and device for kernel prediction network image rain removal
Liu et al. Facial image inpainting using multi-level generative network
CN115082306A (en) Image super-resolution method based on blueprint separable residual error network
CN114155165A (en) Image defogging method based on semi-supervision
CN116912148B (en) Image enhancement method, device, computer equipment and computer readable storage medium
CN113298931A (en) Reconstruction method and device of object model, terminal equipment and storage medium
CN117173021A (en) Video processing method and device
CN115272131B (en) Image mole pattern removing system and method based on self-adaptive multispectral coding
CN116434303A (en) Facial expression capturing method, device and medium based on multi-scale feature fusion
CN113727050B (en) Video super-resolution processing method and device for mobile equipment and storage medium
CN115423697A (en) Image restoration method, terminal and computer storage medium
CN115719297A (en) Visible watermark removing method, system, equipment and medium based on high-dimensional space decoupling
CN113744250A (en) Method, system, medium and device for segmenting brachial plexus ultrasonic image based on U-Net
CN113516240A (en) Neural network structured progressive pruning method and system
CN116309274B (en) Method and device for detecting small target in image, computer equipment and storage medium
CN112329925B (en) Model generation method, feature extraction method, device and electronic equipment
CN116402697A (en) Single image defogging method and device based on image prior and global block aggregation
CN116071478B (en) Training method of image reconstruction model and virtual scene rendering method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination