CN113628129B - Edge attention single image shadow removing method based on semi-supervised learning - Google Patents


Info

Publication number
CN113628129B
CN113628129B CN202110812986.2A
Authority
CN
China
Prior art keywords
shadow
image
network
elimination
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110812986.2A
Other languages
Chinese (zh)
Other versions
CN113628129A (en)
Inventor
肖春霞
朱云
罗飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110812986.2A priority Critical patent/CN113628129B/en
Publication of CN113628129A publication Critical patent/CN113628129A/en
Application granted granted Critical
Publication of CN113628129B publication Critical patent/CN113628129B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of shadow removal in image processing and provides an edge-attention single-image shadow removal method based on semi-supervised learning. The method consists of a generator and a discriminator, where the generator comprises a shadow detection network, an edge attention module, and a shadow removal network. By training the semi-supervised learning network, the invention can detect the shadow regions of shadow images of complex scenes and use them to guide shadow removal, producing a higher-quality shadow-removed image.

Description

Edge attention single image shadow removing method based on semi-supervised learning
Technical Field
The invention relates to an edge-attention single-image shadow removal method based on semi-supervised learning, and in particular to a method for removing complex shadows from real-world scenes. The invention belongs to the field of image illumination editing, and specifically to shadow removal based on semi-supervised learning.
Background
Currently, common shadow removal methods fall into two main categories:
1. Traditional physics-based methods, which analyze the illumination intensity of the image with a physical model, such as the shadow removal method proposed in the paper "Single-image shadow detection and removal using paired regions". These methods can achieve good shadow removal under certain assumptions, but because they depend heavily on prior knowledge and a series of related hypotheses, their generalization ability is poor: most data falling outside the assumptions cannot be handled well, and the results often contain artifacts.
2. Deep-learning-based methods, which largely overcome the traditional methods' reliance on strong assumptions and their artifact-prone results. For example, the adversarial shadow removal method proposed in the paper "RIS-GAN: Explore Residual and Illumination with Generative Adversarial Networks for Shadow Removal" makes progress through supervised learning on large amounts of data, but it still suffers from serious problems such as color distortion and incomplete shadow removal, and it is difficult to meet the requirements of practical applications.
The prior art therefore still lacks a single-image shadow removal method with strong generalization ability that can be used effectively in the real world and meet users' requirements.
Disclosure of Invention
The invention provides a shadow detection and removal network based on semi-supervised learning, aiming to solve the problems of existing shadow removal methods: poor removal quality on real complex scenes, visible shadow boundaries, and color distortion in the results, all of which make them difficult to use in practical applications.
The shadow detection and removal network based on semi-supervised learning comprises a shadow detection network module with edge attention and a deep shadow removal network module. The shadow detection module, which detects shadow regions in the image, contains a feature extractor that extracts feature representations of the image at different scales, a main decoder trained with supervision, and several auxiliary decoders for unsupervised learning, each of which has one more Dropout layer than the main decoder; the downsampled feature map and the edge attention map serve as input to the main decoder. The feature extractor adopts the downsampling structure of a ResNeXt-101 network to extract downsampled image features. The deep shadow removal module comprises a U-Net module and a deep feature fusion module: the U-Net first extracts image features, and deep feature fusion is then performed under the guidance of the shadow mask output by the detection network to remove the shadows and obtain the shadow-removed image. The shadow detection module and the deep shadow removal module share an encoder, i.e. the same network structure with shared parameters. The encoder is the feature extractor; sharing it reduces network complexity and the number of parameters, and thus shortens training time.
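The module layout described above (a shared encoder, a supervised main decoder, and auxiliary decoders that each carry one extra Dropout layer) can be sketched in PyTorch as follows. This is a minimal illustration under stated assumptions: the patent's feature extractor is a ResNeXt-101 backbone, whereas `SharedEncoder`, the channel widths, the number of auxiliary decoders, and the mean-squared consistency term here are placeholders.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Stand-in for the ResNeXt-101 downsampling backbone (illustrative only)."""
    def __init__(self, ch=16):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        f1 = self.stage1(x)   # higher-resolution features
        f2 = self.stage2(f1)  # bottommost (lowest-resolution) features
        return f1, f2

def make_decoder(ch, with_dropout):
    # Auxiliary decoders differ from the main decoder by one extra Dropout layer.
    layers = [nn.Dropout2d(0.5)] if with_dropout else []
    layers += [nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
               nn.Conv2d(ch, 1, 3, padding=1),
               nn.Sigmoid()]
    return nn.Sequential(*layers)

encoder = SharedEncoder()
main_decoder = make_decoder(32, with_dropout=False)                    # supervised branch
aux_decoders = nn.ModuleList([make_decoder(32, with_dropout=True) for _ in range(3)])

x = torch.rand(1, 3, 64, 64)
_, bottom = encoder(x)
main_mask = main_decoder(bottom)               # shadow mask, supervised against ground truth
aux_masks = [d(bottom) for d in aux_decoders]  # unsupervised predictions
# Joint consistency term: the auxiliary predictions are pulled toward the main mask.
consistency = sum(((m - main_mask) ** 2).mean() for m in aux_masks) / len(aux_masks)
```

In the patent, the same encoder is reused by the removal network, which is what reduces the parameter count and training time.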
In the edge-attention image shadow detection and removal based on semi-supervised learning, before shadows are removed from a single image, shadow edge maps must first be produced from the shadow masks in the dataset. Once the data is prepared, single-image shadow removal proceeds by first performing shadow detection on the image and then using the detected shadow region to guide removal, so that the illumination of the shadow region in the image can be restored more faithfully.
After the model is trained, image enhancement is applied to the shadow picture before shadow detection so that the illumination intensity of the shadow image is stronger.
When the network model of the invention performs shadow removal, the original input shadow image and the shadow mask obtained by shadow detection are fed into the shadow removal network. The input image is downsampled to obtain feature maps at different scales; shadow removal is performed on these feature maps under the guidance of the shadow mask; the resulting multi-scale shadow-removal feature maps are fused to obtain the shadow-removed features; and these features are superimposed on the input image to produce the final shadow-removed result.
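The removal pass just described (multi-scale features, mask-guided processing, feature fusion, and superposition on the input) can be sketched as below. The layer choices and channel widths are assumptions, not the patent's actual U-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGuidedRemoval(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.down1 = nn.Conv2d(3, ch, 3, padding=1)             # full-scale features
        self.down2 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)  # half-scale features
        self.fuse = nn.Conv2d(ch * 2, 3, 3, padding=1)          # feature fusion

    def forward(self, image, mask):
        f1 = F.relu(self.down1(image))
        f2 = F.relu(self.down2(f1))
        m2 = F.interpolate(mask, size=f2.shape[-2:])    # mask resized to each scale
        g1 = f1 * mask                                  # mask-guided features, full scale
        g2 = F.interpolate(f2 * m2, size=f1.shape[-2:]) # mask-guided features, upsampled
        residual = self.fuse(torch.cat([g1, g2], dim=1))
        return image + residual                         # superimpose on the input image

net = MaskGuidedRemoval()
img = torch.rand(1, 3, 64, 64)
mask = torch.rand(1, 1, 64, 64)  # shadow mask from the detection network
out = net(img, mask)             # shadow-removed image, same size as the input
```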
A discriminator is also added to the network. Its input is the shadow-removed image produced by the removal network. The image passes through five convolution blocks, each comprising a convolution layer, a normalization layer, and a LeakyReLU activation layer, and the probability that the image is real or fake is then output through a softmax layer; the authenticity of the shadow-removed image is judged against a set threshold, which strengthens the generalization ability of the network.
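A discriminator matching this description might look as follows. The five conv + normalization + LeakyReLU blocks follow the text; the channel widths, the instance-normalization choice, and the final sigmoid (used here in place of the text's softmax, since a single real/fake probability is produced) are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # One of the five blocks: convolution, normalization, LeakyReLU activation.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 4, stride=2, padding=1),
        nn.InstanceNorm2d(cout),
        nn.LeakyReLU(0.2),
    )

class Discriminator(nn.Module):
    def __init__(self, ch=8):
        super().__init__()
        chans = [3, ch, ch * 2, ch * 4, ch * 8, ch * 8]
        self.blocks = nn.Sequential(*[conv_block(chans[i], chans[i + 1]) for i in range(5)])
        self.head = nn.Linear(chans[-1], 1)

    def forward(self, x):
        f = self.blocks(x).mean(dim=[2, 3])  # global pooling over spatial dims
        return torch.sigmoid(self.head(f))   # probability that the image is real

d = Discriminator()
p = d(torch.rand(2, 3, 64, 64))
real = p > 0.5  # threshold at 0.5: True ("1") = judged real, False ("0") = judged fake
```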
The method comprises the following specific steps:
Step S1: use a paired dataset, or a published dataset similar to the application scene, containing original shadow images and the corresponding shadow-free images.
Step S2: train the semi-supervised shadow detection and removal network of the invention with supervision, using the dataset prepared in step S1.
Step S3: after the model training of step S2 is completed, apply image enhancement to the shadow picture before shadow detection so that the illumination intensity of the shadow image is stronger and the shadows are easier for the network to detect.
Step S4: feed the input shadow image and the shadow-region mask obtained in step S3 into the shadow removal network, downsample the input image to obtain feature maps at different scales, and perform shadow removal on these feature maps under the guidance of the shadow mask.
Step S5: fuse the multi-scale shadow-removal feature maps obtained in step S4 to obtain the shadow-removed features, then superimpose them on the input image to obtain the final shadow-removed result.
The invention has the advantages that:
1. In the shadow detection stage, additional unsupervised samples are used alongside the supervised training, so the network can learn shadow detection information absent from the supervised data, which enhances both the detection capability of the network and the robustness of the model.
2. In the shadow removal stage, group convolution is first applied to the different groups of convolution sequences to reduce intra-class variance and strengthen the expressiveness of the in-group features; the shadows within each group are then removed separately, and the per-group feature maps are fused after removal to obtain the final shadow-removal features.
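The group-convolution idea in point 2 can be illustrated as follows: `groups=4` makes each filter mix only the channels of its own group, and a 1x1 convolution then fuses the per-group outputs. The channel counts are arbitrary choices for the sketch.

```python
import torch
import torch.nn as nn

# Group convolution splits the channels into independent groups, so each group's
# filters mix only the features within that group (lower intra-class variance),
# after which the per-group outputs are fused across groups.
x = torch.rand(1, 8, 16, 16)
grouped = nn.Conv2d(8, 8, 3, padding=1, groups=4)  # 4 groups of 2 channels each
fused = nn.Conv2d(8, 8, 1)                         # 1x1 conv fuses across groups
y = fused(grouped(x))

# A grouped conv also uses fewer parameters than a dense conv of the same shape:
dense = nn.Conv2d(8, 8, 3, padding=1)
assert sum(p.numel() for p in grouped.parameters()) < sum(p.numel() for p in dense.parameters())
```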
Drawings
Fig. 1 is a schematic diagram of an overall network for shadow detection and elimination in the present embodiment.
Fig. 2 is a schematic diagram of the data production and training network of the shadow detection module in the present embodiment.
Fig. 3 is a schematic diagram of the edge attention module network in the shadow detection network according to the present embodiment.
Fig. 4 is a network schematic diagram of the shadow elimination module according to the present embodiment.
Fig. 5 is a network schematic diagram of the discriminator for shadow-removed images in the present embodiment.
Detailed Description
The objects, technical solutions, and advantages of the present invention will become clearer from the following detailed description with reference to the accompanying drawings and examples. It is to be understood that the examples described herein are merely illustrative of the invention and not restrictive.
Example 1
This embodiment of the image shadow detection and removal method based on semi-supervised learning effectively removes the shadows in shadow regions of complex-scene images.
Fig. 1 is a schematic diagram of the network for detecting and removing image shadows in this embodiment. A shadow image is input; the encoder downsamples it to obtain the input features of the attention network, and the features at different scales pass through the attention module and the decoder to generate the shadow detection map of the input image. The features from the shared encoder are decoded directly by the decoder in the shadow removal network, a dot product is taken with the shadow mask output by the detection network, the final shadow-removal features are obtained through a deep feature fusion network, and the result is finally superimposed on the original image to obtain the shadow-removed image. Specifically, the in-group features are first fused by group convolution, and the output features of the different groups are then summed to superimpose the features.
Fig. 2 shows the shadow detection module of this embodiment. The module comprises a feature extractor, an attention module, and several decoders. In this embodiment, the encoder downsamples the input image at different scales to obtain multi-scale feature maps; each feature map generates edge features of the shadow mask through the edge attention module, which supplies edge feature information to the main decoder. Under the guidance of the edge features, the bottommost features are passed through the main decoder to generate the shadow mask used to train the supervised part of the network. In the unsupervised part of the module, the bottommost features pass through a random Dropout layer and are fed to several decoders; after repeated upsampling, several shadow masks are obtained, and a joint consistency loss between them constrains the training of the network. The encoder in the present invention is the feature extractor.
Fig. 3 shows the edge attention network in the shadow detection network of this embodiment. Its inputs are the first-layer features and the bottommost features of the encoder. The bottommost features pass through a 1x1 convolution layer and an upsampling layer to obtain a feature map of the same size as the first-layer features, which is superimposed on the first-layer features; the result then passes through a 2x2 deconvolution, a 3x3 convolution, a 1x1 convolution, and a Sigmoid layer to output the shadow edge attention map, and supervised learning is applied to the edge region of the shadow area.
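The layer sequence of the edge attention module described above can be sketched directly; the channel widths and feature sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAttention(nn.Module):
    # Follows the sequence in the text: 1x1 conv + upsampling on the bottommost
    # features, addition with the first-layer features, then a 2x2 deconvolution,
    # a 3x3 convolution, a 1x1 convolution, and a Sigmoid output.
    def __init__(self, c_first=16, c_bottom=64):
        super().__init__()
        self.reduce = nn.Conv2d(c_bottom, c_first, 1)                    # 1x1 conv
        self.deconv = nn.ConvTranspose2d(c_first, c_first, 2, stride=2)  # 2x2 deconv
        self.conv3 = nn.Conv2d(c_first, c_first, 3, padding=1)           # 3x3 conv
        self.conv1 = nn.Conv2d(c_first, 1, 1)                            # 1x1 conv

    def forward(self, first, bottom):
        up = F.interpolate(self.reduce(bottom), size=first.shape[-2:],
                           mode="bilinear", align_corners=False)
        x = first + up                       # superimpose the two feature maps
        x = self.conv3(self.deconv(x))
        return torch.sigmoid(self.conv1(x))  # shadow-edge attention map

ea = EdgeAttention()
first = torch.rand(1, 16, 32, 32)  # first-layer encoder features
bottom = torch.rand(1, 64, 8, 8)   # bottommost encoder features
att = ea(first, bottom)            # attention map at twice the first-layer resolution
```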
Fig. 4 shows the network of the shadow removal module in this embodiment. The shadow image is passed through a U-Net to generate a feature map; the dot product of the feature map and the shadow mask is fed into the deep feature fusion module to obtain the shadow-removal feature map, which is finally superimposed on the shadow image to obtain the shadow-removed image.
Fig. 5 is a schematic diagram of the discriminator network in this embodiment. Its input is the shadow-removed image produced by the removal network. The image passes through five convolution blocks, each comprising a convolution layer, a normalization layer, and a LeakyReLU activation layer, and the probability that the image is real or fake is then output through a softmax layer. When the output probability is below 0.5 the image is judged fake ("0"), indicating that the shadow removal effect is unsatisfactory and does not meet the network's requirements; when the probability is above 0.5 it is judged real ("1"), indicating that the shadow removal effect is satisfactory. The discriminator acts as part of the training network to strengthen its generalization ability. The shadow detection and removal network is an end-to-end single-image shadow removal network that removes image shadows in two parts. In the first part, a shadow image is taken as the network input and the shadow region in it is detected, yielding a shadow mask. In the second part, the shadow image and the mask detected in the first part are taken as input, and the shadow region in the shadow image is removed under the guidance of the mask to obtain the shadow-removed image.
Referring to fig. 1, based on the shadow removal network of this embodiment, this embodiment provides a single-image shadow removal method comprising the following steps:
step S1: the SRD dataset disclosed in the paper "Deshadownet: a multi-context embedding deep network for shadow removal" is used as the dataset required to train the shadow detection network. The specific production process is shown in fig. 2, and a pair of image pairs comprising shadow images and corresponding shadow mask patterns are randomly selected from the SRD dataset and input into a shadow detection network. And (3) making edge supervision data of the shadow image by expanding surrounding pixels inwards and outwards respectively as boundary parts of the shadow region and the non-shadow region on the edge of the shadow mask image in the data set.
Step S2: training the shadow detection and elimination network using the data set used and made in step S1;
Step S3: the shadow detection network is trained by semi-supervised learning: the supervised part trains a basic shadow detection model, and the unsupervised part further generalizes and improves the detection ability of the model. In this step, a ResNeXt-101 network is first used as the downsampling backbone to downsample the input image into feature maps at different scales. After context feature fusion of the highest-dimensional and lowest-dimensional features, the edge information of the shadow region in the shadow image is obtained through the edge attention module, which supervises the boundary between the shadow and non-shadow regions with an edge loss, given by:
where the two inputs of the loss are the shadow edge map output by the detection network and the corresponding ground-truth shadow edge map, respectively.
The supervised learning part applies a constraint between the shadow mask output by the main decoder and the ground-truth shadow mask in the dataset; the loss function is:
where the two inputs are the shadow mask output by the shadow detection network and the corresponding ground-truth shadow mask, respectively.
The loss function in this section can be expressed as:
L_d = L_m + L_b
where L_d, L_m, and L_b denote the total loss of the shadow detection network, the shadow mask detection loss, and the shadow edge detection loss, respectively.
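The per-term formulas of L_m and L_b are not reproduced above; a common choice for mask and edge supervision is binary cross-entropy, which is assumed in this sketch of L_d = L_m + L_b.

```python
import torch
import torch.nn.functional as F

# Assumed form: binary cross-entropy for both the shadow-mask and shadow-edge
# supervision terms; the patent only states that L_d = L_m + L_b.
def detection_loss(pred_mask, gt_mask, pred_edge, gt_edge):
    l_m = F.binary_cross_entropy(pred_mask, gt_mask)  # shadow mask detection loss L_m
    l_b = F.binary_cross_entropy(pred_edge, gt_edge)  # shadow edge detection loss L_b
    return l_m + l_b                                  # total detection loss L_d

pred = torch.rand(1, 1, 8, 8)
gt = (torch.rand(1, 1, 8, 8) > 0.5).float()
loss = detection_loss(pred, gt, pred, gt)
```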
Step S4: train the shadow removal network. The original shadow image is passed through a U-Net to obtain its feature map; the shadow mask output in step S3 serves as guiding information for removing the shadow from the feature map; after feature fusion, the result is superimposed on the original image to obtain the shadow-removed image. In this part of the network, training is constrained using the shadow-free image corresponding to each shadow image in the dataset, with the loss:
where the two inputs are the image output by the shadow removal network and the corresponding ground-truth shadow-free image, respectively.
In steps S3 and S4 of this embodiment, a size preprocessing operation is applied to the images fed to the network, resizing them all to 256x256 pixels; in step S4, the input shadow mask is additionally normalized, with the calculation:
where Image_{i,j} denotes the pixel value of the input image at position (i, j).
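The exact normalization formula is not reproduced above; the conventional scaling of 8-bit pixel values into [0, 1] is assumed in this sketch.

```python
import numpy as np

def normalize(image):
    # Assumed normalization: scale 8-bit pixel values Image[i, j] into [0, 1].
    return image.astype(np.float32) / 255.0

img = np.array([[0, 128, 255]], dtype=np.uint8)
out = normalize(img)
```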
This embodiment provides an edge-attention shadow detection and removal method based on semi-supervised learning. A semi-supervised learning strategy is used to learn shadow detection on shadow images of complex scenes, and an edge attention mechanism is used to learn small shadow regions and soft shadow edge regions in a targeted manner; the detected shadow mask then guides the shadow removal task to obtain the shadow-removed image. The method addresses the current difficulty of detecting and removing shadows in complex scenes, especially in small shadow regions and soft shadow regions.

Claims (6)

1. An edge-attention image shadow detection and removal method based on semi-supervised learning, characterized in that: the method comprises semi-supervised image shadow detection and supervised image shadow removal, with detection performed first and removal second;
in the semi-supervised image shadow detection, additional unsupervised samples are added during supervised training, so that the network can learn shadow detection information absent from the supervised data, enhancing the detection capability of the network and the robustness of the model;
the semi-supervised image shadow detection obtains a shadow mask through a feature extractor, a supervised main decoder, and several unsupervised auxiliary decoders, with the feature map and the edge attention map taken as input to the main decoder;
in the semi-supervised training of the shadow detection network, the supervised part trains a basic shadow detection model, and the unsupervised part further generalizes and improves the detection ability of the model, as follows:
first, a ResNeXt-101 network is used as the downsampling backbone to downsample the input image into feature maps at different scales; after context feature fusion of the highest-dimensional and lowest-dimensional features, the edge information of the shadow region in the shadow image is obtained through the edge attention module, which supervises the boundary between the shadow and non-shadow regions with an edge loss; the supervised part applies a constraint between the shadow mask output by the main decoder and the ground-truth shadow mask in the dataset;
in the supervised shadow removal, group convolution is first applied to the different groups of convolution sequences to reduce intra-class variance and strengthen the expressiveness of in-group features; the shadows within each group are removed separately, and the per-group feature maps are fused after removal to obtain the final shadow-removal features, as follows:
after the model is trained, image enhancement is applied to the shadow picture before shadow detection so that the illumination intensity of the shadow image is stronger;
when the network model performs shadow removal, the original input shadow image and the shadow mask obtained by shadow detection are fed into the shadow removal network; the input image is downsampled into feature maps at different scales, shadow removal is performed on them under the guidance of the shadow mask, the resulting multi-scale shadow-removal feature maps are fused into the shadow-removed features, and these are superimposed on the input image to obtain the final shadow-removed result.
2. The method according to claim 1, characterized in that: the supervised image shadow removal is realized through a U-Net module and a deep feature fusion network: image features are first extracted by the U-Net, deep feature fusion is then performed under the guidance of the shadow mask output by the detection network to remove the shadows, and the shadow-removed image is finally obtained.
3. The method according to claim 1, characterized in that: the network is trained on paired datasets or public datasets with similar scenes.
4. The method according to claim 2, characterized in that: during model training, a discriminator is further added to the network; its input is the shadow-removed image produced by the removal network; the image passes through five convolution blocks, each comprising a convolution layer, a normalization layer, and a LeakyReLU activation layer, and the probability that the image is real or fake is then output through a softmax layer; the authenticity of the shadow-removed image is judged against a set threshold, strengthening the generalization ability of the network.
5. The method according to claim 1, characterized in that: the edge loss is given by:
where the two inputs are the shadow edge map output by the shadow detection network and the corresponding ground-truth shadow edge map, respectively;
the loss function of the supervised learning part is:
where the two inputs are the shadow mask output by the shadow detection network and the corresponding ground-truth shadow mask, respectively,
the loss function in this section can be expressed as:
L_d = L_m + L_b
where L_d, L_m, and L_b denote the total loss of the shadow detection network, the shadow mask detection loss, and the shadow edge detection loss, respectively.
6. The method according to claim 1, characterized in that: the shadow removal network is trained as follows: the original shadow image is passed through a U-Net to obtain its feature map; the shadow mask serves as guiding information for removing the shadow from the feature map; after feature fusion, the result is superimposed on the original image to obtain the shadow-removed image; in this part of the network, training is constrained using the shadow-free images corresponding to the shadow images in the dataset, with the loss:
where the two inputs are the image output by the shadow removal network and the corresponding ground-truth shadow-free image, respectively,
during use of the network, a size preprocessing operation is applied to the input images, resizing them all to 256x256 pixels; the input shadow mask is normalized, with the calculation:
where Image_{i,j} denotes the pixel value of the input image at position (i, j).
CN202110812986.2A 2021-07-19 2021-07-19 Edge attention single image shadow removing method based on semi-supervised learning Active CN113628129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110812986.2A CN113628129B (en) 2021-07-19 2021-07-19 Edge attention single image shadow removing method based on semi-supervised learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110812986.2A CN113628129B (en) 2021-07-19 2021-07-19 Edge attention single image shadow removing method based on semi-supervised learning

Publications (2)

Publication Number Publication Date
CN113628129A CN113628129A (en) 2021-11-09
CN113628129B true CN113628129B (en) 2024-03-12

Family

ID=78380056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110812986.2A Active CN113628129B (en) 2021-07-19 2021-07-19 Edge attention single image shadow removing method based on semi-supervised learning

Country Status (1)

Country Link
CN (1) CN113628129B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147412B (en) * 2022-08-31 2022-12-16 武汉大学 Long time sequence network for memory transfer and video shadow detection method
CN115375589B (en) * 2022-10-25 2023-02-10 城云科技(中国)有限公司 Model for removing image shadow and construction method, device and application thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978807A (en) * 2019-04-01 2019-07-05 西北工业大学 A shadow removal method based on a generative adversarial network
CN110443763A (en) * 2019-08-01 2019-11-12 山东工商学院 A kind of Image shadow removal method based on convolutional neural networks
CN112529789A (en) * 2020-11-13 2021-03-19 北京航空航天大学 Weak supervision method for removing shadow of urban visible light remote sensing image
CN112801107A (en) * 2021-02-01 2021-05-14 联想(北京)有限公司 Image segmentation method and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ARGAN: attentive recurrent generative adversarial network for shadow detection and removal; Bin Ding; arXiv; full text *
Bin Ding. ARGAN: attentive recurrent generative adversarial network for shadow detection and removal. arXiv. 2019, full text. *

Also Published As

Publication number Publication date
CN113628129A (en) 2021-11-09

Similar Documents

Publication Publication Date Title
CN111062872B (en) Image super-resolution reconstruction method and system based on edge detection
Yeh et al. Multi-scale deep residual learning-based single image haze removal via image decomposition
CN109410239B (en) Text image super-resolution reconstruction method based on condition generation countermeasure network
Li et al. Single image dehazing via conditional generative adversarial network
CN113628129B (en) Edge attention single image shadow removing method based on semi-supervised learning
CN111292265A (en) Image restoration method based on a generative adversarial neural network
CN111915530A (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN110363068B (en) High-resolution pedestrian image generation method based on multiscale circulation generation type countermeasure network
CN111768340B (en) Super-resolution image reconstruction method and system based on dense multipath network
CN116029902A (en) Knowledge distillation-based unsupervised real world image super-resolution method
CN117274059A (en) Low-resolution image reconstruction method and system based on image coding-decoding
CN115965844B (en) Multi-focus image fusion method based on visual saliency priori knowledge
CN113269167B (en) Face counterfeiting detection method based on image blocking and disordering
CN116823647A (en) Image complement method based on fast Fourier transform and selective attention mechanism
Li et al. Adversarial feature hybrid framework for steganography with shifted window local loss
CN114973364A (en) Depth image false distinguishing method and system based on face region attention mechanism
Tian et al. Perceptive self-supervised learning network for noisy image watermark removal
CN114331894A (en) Face image restoration method based on potential feature reconstruction and mask perception
CN114698398A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
Wu et al. Steganalysis using unsupervised end-to-end CNN fused with residual image
CN117576567B (en) Remote sensing image change detection method using multi-level difference characteristic self-adaptive fusion
CN117252747A (en) Convolutional neural network-based digital watermark image self-supervision black box attack method
CN117830824A (en) Weak supervision remote sensing image semantic change detection algorithm based on contrast learning
Rao et al. IMAGE RESTORATION USING RESIDUAL GENERATIVE ADVERSARIAL NETWORKS
CN117523378A (en) Sea urchin classification detection method using sea urchin EfficientDet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant