CN114742733A - Cloud removing method and device, computer equipment and storage medium - Google Patents

Cloud removing method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN114742733A
Authority
CN
China
Prior art keywords: image, cloud, calculation, feature map, residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210410940.2A
Other languages
Chinese (zh)
Inventor
张佳颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202210410940.2A priority Critical patent/CN114742733A/en
Publication of CN114742733A publication Critical patent/CN114742733A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a cloud removal method, a cloud removal device, computer equipment, a storage medium and a computer program product, relates to the technical field of artificial intelligence, and can be used in the field of financial technology or other related fields. The method comprises the following steps: performing convolution calculation on the acquired image to be processed based on a pre-trained cloud removal model to obtain a plurality of types of feature maps; processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map, and processing the second type feature map according to a preset dense residual algorithm to obtain a dense residual; and fusing the enhanced feature map and the dense residual to obtain a first fusion feature, and performing image reconstruction on the first fusion feature to obtain a target cloud-free image. The multi-scale feature extraction network adopted in the method can effectively extract the local and global features of the image, avoids color distortion and blurring during image restoration, improves the rationality of image reconstruction, and generates a clear and realistic cloud-free image.

Description

Cloud removal method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a cloud removal method and apparatus, a computer device, and a storage medium.
Background
With the continuous maturation of remote sensing acquisition technology, high-resolution remote sensing images have been widely applied to various earth observation activities, such as climate change assessment, land cover recognition and crop detection. However, the surface information extracted from remote sensing images is highly susceptible to interference from cloud cover and other natural conditions. To improve the effectiveness and usability of remote sensing data, cloud cover in the data needs to be accurately identified and removed.
In the related art, cloud removal for remote sensing images is usually performed with a generative adversarial network (GAN) obtained through deep learning, which constructs a nonlinear mapping between cloudy images and cloud-free images so as to remove the cloud from a cloudy image. However, cloud removal based on such deep learning models may blur the generated cloud-free image, so the cloud removal effect is poor.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a cloud removing method, an apparatus, a computer device, and a storage medium capable of ensuring the sharpness of a cloud-removed image.
In a first aspect, the present application provides a cloud removal method. The method comprises the following steps:
acquiring an image to be processed, wherein the image to be processed is a remote sensing image with cloud cover;
performing first-type convolution calculation on the image to be processed based on the pre-trained cloud removal model to obtain a first-type feature map, and performing second-type convolution calculation on the image to be processed to obtain a second-type feature map;
processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map, and performing dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual;
performing fusion processing on the enhanced feature map and the dense residual error to obtain a first fusion feature;
and reconstructing an image according to the first fusion characteristic to obtain a target cloud-free image corresponding to the image to be processed.
In one embodiment, the acquiring the image to be processed includes:
acquiring an initial image, identifying a cloud layer area in the initial image, obtaining a first cloud area image and a first cloud-free area image corresponding to the initial image, and taking the first cloud area image as the image to be processed;
the method further comprises the following steps:
and superposing the first cloud-free area image and the target cloud-free image to obtain a cloud-free image corresponding to the initial image.
In one embodiment, the performing a first-type convolution calculation on the image to be processed to obtain a first-type feature map includes:
performing convolution calculation on the image to be processed respectively according to a plurality of different preset scales to obtain initial feature maps of a plurality of different dimensions corresponding to the image to be processed, wherein the dimensions correspond to the preset scales one by one;
performing fusion splicing processing on the two initial feature maps to obtain an updated initial feature map of the image to be processed;
and under the condition that the updated initial feature map does not meet the preset single condition, re-executing the step of performing fusion splicing processing on every two initial feature maps to obtain the updated initial feature map of the image to be processed until the updated initial feature map meets the preset single condition, and taking the initial feature map meeting the preset single condition as the first type feature map.
In one embodiment, the processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map includes:
performing residual calculation on the first type feature map to obtain target residual features;
and performing enhancement processing on the target residual features according to a preset mixed attention algorithm to obtain an enhanced feature map.
In one embodiment, the reconstructing an image according to the first fusion feature to obtain a target cloud-free image corresponding to the image to be processed includes:
performing upsampling processing on the first fusion feature a target number of times to obtain recovered pixel data;
and performing the second type convolution calculation on the recovered pixel data to obtain a target cloud-free image corresponding to the image to be processed.
In one embodiment, the cloud removal model comprises a residual calculation module comprising a fusion unit and a plurality of residual calculation units;
the residual calculation of the first type feature map to obtain the target residual feature comprises:
and performing residual calculation on the first type feature map through the fusion unit and the plurality of residual calculation units in the residual calculation module to obtain target residual features, wherein the input end and the output end of each residual calculation unit are sequentially connected, the output end of the first residual calculation unit is fused with the output end of a target residual calculation unit among the plurality of residual calculation units and then connected with the input end of the residual calculation unit following the target residual calculation unit, and the output end of the first residual calculation unit is fused with the output end of the last residual calculation unit and then connected with the input end of the fusion unit.
In one embodiment, the preset mixed attention algorithm comprises a preset pooling algorithm and a preset channel algorithm; the enhancing of the target residual features according to a preset mixed attention algorithm to obtain an enhanced feature map includes:
performing pooling calculation on the target residual error according to a preset pooling algorithm to obtain an initial spatial feature;
according to a preset channel algorithm, channel calculation is carried out on the target residual error to obtain initial channel characteristics;
and determining an enhanced feature map according to the initial spatial feature and the initial channel feature.
In one embodiment, the method further comprises:
acquiring training data, wherein the training data comprises a plurality of groups of image pairs, and the image pairs comprise a sample cloud image and a sample cloud-free image;
inputting the sample cloud image into the cloud removal model to be trained to obtain a predicted cloud-free image;
calculating a target loss value through a loss function according to the sample cloud-free image and the predicted cloud-free image;
and updating the network parameters of the cloud removal model to be trained according to the target loss value, and returning to the step of acquiring training data until the target loss value meets the training completion condition, so as to obtain the trained cloud removal model.
In one embodiment, the calculating a target loss value according to the sample cloud-free image and the predicted cloud-free image by a loss function includes:
calculating a forward cloud removal loss value, a reverse cloud loss value, a cycle consistency loss value and a perception loss value according to the sample cloud-free image and the predicted cloud-free image through a forward cloud removal loss function, a reverse cloud loss function, a cycle consistency loss function and a perception loss function, respectively;
and superposing the forward cloud removal loss value, the reverse cloud loss value, the cycle consistency loss value and the perception loss value to obtain a target loss value.
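For illustration, a minimal sketch of this composite objective is given below, assuming a PyTorch implementation; the use of L1 distances for the four terms, the equal (unit) weighting, and the feature extractor used for the perception loss are illustrative assumptions, since the embodiment above only specifies that the four loss values are superposed.

```python
import torch.nn as nn

# Hedged sketch of the composite loss; the L1 form of each term, the equal
# weighting and the feature_extractor argument are illustrative assumptions.
l1 = nn.L1Loss()

def target_loss(pred_cloud_free, sample_cloud_free,
                pred_cloudy, sample_cloudy,
                cycle_reconstruction, cycle_target,
                feature_extractor):
    forward_decloud = l1(pred_cloud_free, sample_cloud_free)    # forward cloud removal loss value
    reverse_cloud = l1(pred_cloudy, sample_cloudy)              # reverse cloud loss value
    cycle_consistency = l1(cycle_reconstruction, cycle_target)  # cycle consistency loss value
    perception = l1(feature_extractor(pred_cloud_free),
                    feature_extractor(sample_cloud_free))       # perception loss value
    return forward_decloud + reverse_cloud + cycle_consistency + perception  # superposition
```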
In a second aspect, the application further provides a cloud removing device. The device comprises:
the acquisition module is used for acquiring an image to be processed, wherein the image to be processed is a remote sensing image with cloud cover;
the convolution calculation module is used for performing first type convolution calculation on the image to be processed based on the pre-trained cloud removal model to obtain a first type feature map, and performing second type convolution calculation on the image to be processed to obtain a second type feature map;
the enhancement module is used for processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map, and performing dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual;
the fusion module is used for performing fusion processing on the enhanced feature map and the dense residual to obtain a first fusion feature;
and the reconstruction module is used for reconstructing an image according to the first fusion feature to obtain a target cloud-free image corresponding to the image to be processed.
In one embodiment, the acquisition module is specifically configured to:
acquiring an initial image, identifying a cloud layer area in the initial image, obtaining a first cloud area image and a first cloud-free area image corresponding to the initial image, and taking the first cloud area image as the image to be processed;
the device further comprises:
and the superposition module is used for superposing the first cloud-free area image and the target cloud-free image to obtain a cloud-free image corresponding to the initial image.
In one embodiment, the convolution calculation module is specifically configured to:
performing convolution calculation on the image to be processed respectively according to a plurality of different preset scales to obtain a plurality of initial feature maps with different dimensionalities corresponding to the image to be processed, wherein the dimensionalities correspond to the preset scales one by one;
performing fusion splicing processing on the two initial feature maps to obtain an updated initial feature map of the image to be processed;
and under the condition that the updated initial feature map does not meet the preset single condition, re-executing the step of performing fusion splicing processing on every two initial feature maps to obtain the updated initial feature map of the image to be processed until the updated initial feature map meets the preset single condition, and taking the initial feature map meeting the preset single condition as the first type feature map.
In one embodiment, the enhancement module is specifically configured to:
performing residual calculation on the first type feature map to obtain target residual features;
and performing enhancement processing on the target residual features according to a preset mixed attention algorithm to obtain an enhanced feature map.
In one embodiment, the reconstruction module is specifically configured to:
performing upsampling processing on the first fusion feature a target number of times to obtain recovered pixel data;
and performing the second type convolution calculation on the recovered pixel data to obtain a target cloud-free image corresponding to the image to be processed.
In one embodiment, the cloud removal model comprises a residual calculation module comprising a fusion unit and a plurality of residual calculation units;
the enhancement module is specifically configured to:
and performing residual calculation on the first type feature map through the fusion unit and the plurality of residual calculation units in the residual calculation module to obtain target residual features, wherein the input end and the output end of each residual calculation unit are sequentially connected, the output end of the first residual calculation unit is fused with the output end of a target residual calculation unit among the plurality of residual calculation units and then connected with the input end of the residual calculation unit following the target residual calculation unit, and the output end of the first residual calculation unit is fused with the output end of the last residual calculation unit and then connected with the input end of the fusion unit.
In one embodiment, the preset mixed attention algorithm comprises a preset pooling algorithm and a preset channel algorithm; the enhancement module is specifically configured to:
performing pooling calculation on the target residual error according to a preset pooling algorithm to obtain an initial spatial feature;
according to a preset channel algorithm, channel calculation is carried out on the target residual error to obtain initial channel characteristics;
and determining an enhanced feature map according to the initial spatial feature and the initial channel feature.
In one embodiment, the apparatus further comprises: a training module, which is used for acquiring training data, wherein the training data comprises a plurality of groups of image pairs, and each image pair comprises a sample cloud image and a sample cloud-free image; inputting the sample cloud image into the cloud removal model to be trained to obtain a predicted cloud-free image; calculating a target loss value through a loss function according to the sample cloud-free image and the predicted cloud-free image; and updating the network parameters of the cloud removal model to be trained according to the target loss value, and returning to the step of acquiring training data until the target loss value meets the training completion condition, so as to obtain the trained cloud removal model.
In one embodiment, the training module is specifically configured to:
calculating a forward cloud removal loss value, a reverse cloud loss value, a cycle consistency loss value and a perception loss value according to the sample cloud-free image and the predicted cloud-free image respectively through a forward cloud removal loss function, a reverse cloud loss function, a cycle consistency loss function and a perception loss function;
and superposing the forward cloud removal loss value, the reverse cloud loss value, the cycle consistency loss value and the perception loss value to obtain a target loss value.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the following steps:
acquiring an image to be processed, wherein the image to be processed is a remote sensing image with cloud cover;
performing first-type convolution calculation on the image to be processed based on the pre-trained cloud removal model to obtain a first-type feature map, and performing second-type convolution calculation on the image to be processed to obtain a second-type feature map;
processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map, and performing dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual;
performing fusion processing on the enhanced feature map and the dense residual to obtain a first fusion feature;
and reconstructing an image according to the first fusion characteristic to obtain a target cloud-free image corresponding to the image to be processed.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the following steps:
acquiring an image to be processed, wherein the image to be processed is a remote sensing image with cloud cover;
performing first-type convolution calculation on the image to be processed based on the pre-trained cloud removal model to obtain a first-type feature map, and performing second-type convolution calculation on the image to be processed to obtain a second-type feature map;
processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map, and performing dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual;
performing fusion processing on the enhanced feature map and the dense residual error to obtain a first fusion feature;
and reconstructing an image according to the first fusion characteristic to obtain a target cloud-free image corresponding to the image to be processed.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the following steps:
acquiring an image to be processed, wherein the image to be processed is a remote sensing image with cloud cover;
performing first-type convolution calculation on the image to be processed based on the pre-trained cloud removal model to obtain a first-type feature map, and performing second-type convolution calculation on the image to be processed to obtain a second-type feature map;
processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map, and performing dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual;
performing fusion processing on the enhanced feature map and the dense residual error to obtain a first fusion feature;
and reconstructing an image according to the first fusion characteristic to obtain a target cloud-free image corresponding to the image to be processed.
The cloud removal method comprises the steps of acquiring a remote sensing image with cloud cover (the image to be processed); performing, based on a pre-trained cloud removal model, first type convolution calculation on the image to be processed to obtain a first type feature map, and performing second type convolution calculation on the image to be processed to obtain a second type feature map; processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map, and performing dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual; fusing the enhanced feature map and the dense residual to obtain a first fusion feature; and reconstructing an image according to the first fusion feature to obtain a target cloud-free image corresponding to the image to be processed. The multi-scale feature extraction network adopted in the method can effectively extract the local and global features of the image and avoid color distortion and blurring during image restoration; the mixed attention module strengthens the network's ability to learn the main features of the image, improving the rationality and authenticity of image reconstruction, so that a clear and realistic cloud-free image can be generated.
Drawings
FIG. 1 is a schematic flow chart diagram of a cloud removal method in one embodiment;
FIG. 2 is a flow diagram illustrating the steps for computing a first type profile in one embodiment;
FIG. 3 is a diagram illustrating a first type of convolution calculation performed in one embodiment;
FIG. 4 is a schematic flow chart of the enhancement step in one embodiment;
FIG. 5 is a diagram illustrating an exemplary structure of a residual calculation module;
FIG. 6 is a diagram illustrating a structure of a residual calculating unit according to an embodiment;
FIG. 7 is a flowchart illustrating the steps of computing an enhanced feature map in one embodiment;
FIG. 8 is a schematic diagram of the structure of a computed enhanced feature map in one embodiment;
FIG. 9 is a diagram illustrating an exemplary configuration of a dense residual calculation unit;
FIG. 10 is a schematic flow chart of the upsampling step performed in one embodiment;
FIG. 11 is a schematic flow chart of the training steps performed in one embodiment;
FIG. 12 is a diagram of a training structure for a cloud removal model in one embodiment;
FIG. 13 is a diagram of a training structure of a cloud removal model in another embodiment;
FIG. 14 is a flowchart illustrating the step of calculating a target loss value in one embodiment;
FIG. 15 is a block diagram showing the structure of a cloud removing apparatus according to one embodiment;
FIG. 16 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
With the continuous maturation of remote sensing acquisition technology, high-resolution remote sensing images have been widely applied to various earth observation activities, such as climate change assessment, land cover identification and crop detection. However, the surface information extracted from remote sensing images is highly susceptible to natural conditions such as cloud occlusion. Accurately identifying and removing cloud cover in remote sensing images is therefore key to improving the effectiveness and usability of remote sensing data. At present, cloud removal methods for remote sensing images fall roughly into four categories, each of which has shortcomings in practice, analyzed as follows:
1. Space-based cloud removal: this method relies mainly on the spatial autocorrelation among local or non-local regions of the cloud-contaminated image to synthesize the cloud-covered region and thereby remove the cloud. Because no reference image is used, space-based methods do not perform well when the cloud area is large or the texture is complex.
2. Spectrum-based cloud removal: this method depends heavily on complementary information in the spectral domain and exploits the different cloud-penetrating abilities of different wavelengths. However, as cloud thickness increases, almost all optical bands suffer information loss, so the cloud removal effect deteriorates; the method is suitable only for thin cloud removal.
3. Time-based cloud removal: complementary information is extracted from remote sensing images of the same scene acquired at different times. This method performs well when there is no obvious change, such as in buildings, between the cloudy and cloud-free images, but it has high computational complexity and consumes substantial computing resources.
4. Deep-learning-based methods: a generative adversarial network is used to construct a nonlinear mapping between cloudy and cloud-free images, thereby removing the cloud effectively. However, deep learning models place extremely high demands on device performance and data sets, and easily introduce problems such as image blurring and style shift.
The cloud removal method provided by the embodiments of the invention addresses the technical problem that cloud removal based on deep learning easily introduces image blurring, and provides a fusion algorithm of mixed attention and cycle consistency based on a GAN (Generative Adversarial Network). With the method of the present application, cloud-covered areas in remote sensing images can be effectively removed and clear, accurate cloud-free images can be generated. In an application scenario of a banking institution, the institution can acquire remote sensing image data of a target scene for cloud removal processing and apply the resulting cloud-free images to post-loan risk assessment in banking business, so that risk assessment can be more accurate.
The learning network in the method provided by the embodiments of the invention fuses the local features extracted by small-scale convolutions with the global features extracted by large-scale convolutions, which improves the ability of the corresponding model to extract deep features. Meanwhile, to make the extracted features more complete, the spatial and channel features can be weighted with a mixed attention module. Finally, a clear cloud-removed image is reconstructed through an upsampling operation. The method fully integrates multi-scale features, an attention mechanism and cycle consistency, effectively removes cloud interference in remote sensing images, and significantly improves the rationality and clarity of image reconstruction.
In an embodiment, as shown in fig. 1, a cloud removal method is provided. This embodiment is described by taking the application of the method to a terminal as an example; it can be understood that the method may also be applied to a server, or to a system comprising the terminal and the server and implemented through interaction between the terminal and the server. The terminal may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and Internet of Things devices, where the Internet of Things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices and the like. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers. In this embodiment, the cloud removal method includes the following steps:
and 102, acquiring an image to be processed.
The image to be processed is a remote sensing image with cloud cover.
Specifically, the terminal can collect a hyperspectral remote sensing image covered by a cloud layer in any scene, and the hyperspectral remote sensing image is used as an image to be processed.
And 104, performing first-type convolution calculation on the image to be processed based on the pre-trained cloud removal model to obtain a first-type feature map, and performing second-type convolution calculation on the image to be processed to obtain a second-type feature map.
Specifically, the image to be processed is input into the pre-trained cloud removal model. The pre-trained cloud removal model is used for removing cloud layers from a cloud-covered remote sensing image, that is, performing cloud removal processing on a cloud image to obtain a cloud-free image. The pre-trained cloud removal model may include a feature extraction module, an enhancement module, a dense residual calculation module and an image reconstruction module, and the enhancement module may include a residual calculation module and a mixed attention module.
In particular, in a pre-trained cloud removal model, the first type of convolution calculations differ from the second type of convolution calculations in that different scales of convolution sizes may be employed for the calculations. The terminal carries out first-type convolution calculation on the image to be processed through the feature extraction module to obtain a first-type feature map; similarly, a second type convolution calculation is carried out on the image to be processed to obtain a second type characteristic diagram. In this way, the terminal can extract the features of the images to be processed with different dimensions, including global features and local features, according to the convolution calculation with different scales.
Optionally, the scale of a convolution characterizes the size of its convolution kernel. For a convolution calculation with a small kernel (for example, 1 × 1 or 3 × 3), the extracted features of the image to be processed are local features; for a convolution calculation with a large kernel (for example, 7 × 7), the extracted features are global features. The terminal extracts features at the different scales respectively, and the feature obtained by fusing the extracted features is a fusion feature that includes both the detail information and the overall information of the image to be processed.
And 106, processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map, and performing dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual.
Specifically, the terminal performs residual calculation on the first type feature map through the residual calculation module in the enhancement module to obtain target residual features, so that the terminal can perform enhancement processing on the target residual features through the mixed attention module according to a preset mixed attention algorithm to obtain an enhanced feature map; the terminal can also perform dense residual calculation on the second type feature map through the dense residual calculation module to obtain the dense residual.
And step 108, performing fusion processing on the enhanced feature map and the dense residual to obtain a first fusion feature.
Specifically, the enhanced feature map may be a feature vector, and the dense residual may also be a feature vector, where the feature dimension of the enhanced feature map is different from the feature dimension of the dense residual. In the fusion processing, the terminal splices the enhanced feature map and the dense residual and takes the spliced feature map as the first fusion feature.
For example, the enhanced feature map may be a-dimensional feature vector, the dense residual may be a b-dimensional feature vector, and the process of fusing the terminal may be to directly splice the a-dimensional feature vector and the b-dimensional feature vector to obtain a (a + b) -dimensional feature vector.
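As a concrete illustration, a minimal sketch of this fusion step is given below, assuming a PyTorch implementation with channel-first feature tensors; the channel counts are illustrative only.

```python
import torch

# Hedged sketch: the enhanced feature map (a channels) and the dense residual
# (b channels) are spliced along the channel dimension into an (a + b)-channel
# first fusion feature. Shapes are illustrative assumptions.
enhanced_feature_map = torch.randn(1, 64, 64, 64)   # a = 64 channels
dense_residual = torch.randn(1, 32, 64, 64)         # b = 32 channels
first_fusion_feature = torch.cat([enhanced_feature_map, dense_residual], dim=1)  # a + b = 96 channels
```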
And step 110, reconstructing an image according to the first fusion feature to obtain a target cloud-free image corresponding to the image to be processed.
Specifically, the terminal may perform upsampling processing on the first fusion feature through the image reconstruction module, and perform pixel data recovery on an image corresponding to the first fusion feature to obtain a target cloud-free image corresponding to the image to be processed.
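A minimal reconstruction sketch is given below, assuming a PyTorch implementation; the bilinear upsampling mode, the channel counts, the number of upsampling stages, and the use of a single 3 × 3 convolution as the second type convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of the image reconstruction module: upsample the first fusion
# feature a target number of times to recover pixel data, then apply the
# (assumed) second type convolution to map the result back to image channels.
class ReconstructionHead(nn.Module):
    def __init__(self, in_channels: int = 96, out_channels: int = 3, target_times: int = 2):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.target_times = target_times
        self.to_image = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, first_fusion_feature: torch.Tensor) -> torch.Tensor:
        x = first_fusion_feature
        for _ in range(self.target_times):   # upsampling a target number of times
            x = self.upsample(x)
        return self.to_image(x)              # target cloud-free image
```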
In the cloud removal method, a remote sensing image with cloud cover (the image to be processed) is acquired; based on a pre-trained cloud removal model, first type convolution calculation is performed on the image to be processed to obtain a first type feature map, and second type convolution calculation is performed on the image to be processed to obtain a second type feature map; the first type feature map is processed according to a preset mixed attention algorithm to obtain an enhanced feature map, and dense residual calculation is performed on the second type feature map according to a preset dense residual algorithm to obtain a dense residual; the enhanced feature map and the dense residual are fused to obtain a first fusion feature; and an image is reconstructed according to the first fusion feature to obtain a target cloud-free image corresponding to the image to be processed. The multi-scale feature extraction network adopted in the method can effectively extract the local and global features of the image and avoid color distortion and blurring during image restoration; the mixed attention module strengthens the network's ability to learn the main features of the image, improving the rationality and authenticity of image reconstruction, so that a clear and realistic cloud-free image can be generated.
In one embodiment, the specific implementation of step 102 "acquiring an image to be processed" includes:
the method comprises the steps of collecting an initial image, identifying a cloud layer area in the initial image, obtaining a first cloud area image and a first non-cloud area image corresponding to the initial image, and taking the first cloud area image as an image to be processed.
The terminal obtains an initial image of an actual scene through an image acquisition device, where the actual scene may be land containing buildings, farmland or the like, and the initial image may be a hyperspectral remote sensing image of the actual scene containing cloud cover.
Specifically, the terminal can identify the cloud layer area in the initial image through a preset cloud recognition model, mark the cloud layer area with candidate boxes to obtain the first cloud area image, which serves as the image to be processed, and take the image of the area not marked by the candidate boxes as the first cloud-free area image.
Correspondingly, the cloud removal method further comprises the following steps:
and superposing the first cloud-free area image and the target cloud-free image to obtain a cloud-free image corresponding to the initial image.
Specifically, the terminal removes the cloud layer area in the image to be processed by the cloud removal method described in this embodiment to obtain the cloud-free image corresponding to the image to be processed (the target cloud-free image), and then splices the target cloud-free image with the first cloud-free area image to obtain the image of the initial image with the cloud layer area removed.
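The superposition of the two images can be sketched as follows, assuming the cloud recognition result has been converted into a binary mask; the mask representation and the function name are assumptions, since the embodiment above marks the cloud layer area with candidate boxes.

```python
import torch

# Hedged sketch: pixels inside the cloud mask come from the reconstructed target
# cloud-free image, the remaining pixels come from the first cloud-free area of
# the initial image.
def superpose(initial_image: torch.Tensor,
              target_cloud_free: torch.Tensor,
              cloud_mask: torch.Tensor) -> torch.Tensor:
    # cloud_mask: (N, 1, H, W) with value 1 inside the identified cloud layer area
    return cloud_mask * target_cloud_free + (1.0 - cloud_mask) * initial_image
```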
In one embodiment, as shown in fig. 2, the specific processing procedure of "performing a first-type convolution calculation on the image to be processed to obtain a first-type feature map" in step 104 includes:
step 202, performing convolution calculation on the image to be processed respectively according to a plurality of different preset scales to obtain a plurality of initial feature maps with different dimensions corresponding to the image to be processed.
Wherein, the dimension corresponds to the preset dimension one by one.
Specifically, the preset scale represents a convolution kernel size of the convolution calculation. The terminal can determine a plurality of preset scales with different sizes according to the actual scene. And performing convolution calculation on the image to be processed according to a plurality of different preset scales. The specific calculation process may be that convolution calculations with different convolution kernel sizes are performed on the image to be processed, so as to obtain initial feature maps of the processed image with multiple dimensions.
And 204, performing fusion splicing processing on every two initial feature maps to obtain the updated initial feature map of the image to be processed.
Specifically, the terminal determines the mode of performing fusion splicing processing according to the number of the initial feature maps. The step of performing fusion splicing processing (Concat) on the initial feature map by the terminal is to splice (channel cascade) the initial feature map directly according to the channel dimension so as to increase the feature number of the updated initial feature map.
Optionally, channel concatenation refers to stitching two images with 3 dimensions of width (w), height (h) and channel (c) according to the channel dimension.
In one example, if the number of the initial feature maps is an even number, the terminal performs a fusion splicing process on every two initial feature maps to obtain at least one updated initial feature map.
In another example, if the number of the initial feature maps is an odd number, the terminal may sequentially perform fusion splicing processing (channel cascade) on the ith initial feature map and the (i + 1)th initial feature map according to the arrangement order to obtain at least one updated initial feature map, where i is greater than 0 and smaller than the number of preset scales.
And step 206, under the condition that the updated initial feature map does not meet the preset single condition, performing fusion splicing processing on every two initial feature maps again to obtain the updated initial feature map of the image to be processed until the updated initial feature map meets the preset single condition, and taking the initial feature map meeting the preset single condition as the first type feature map.
Specifically, the terminal determines according to the number of the updated initial feature maps, and if the number of the updated initial feature maps does not satisfy the preset single condition, that is, there are multiple updated initial feature maps, the terminal repeatedly executes the process of step 204 to perform the fusion splicing process on the multiple updated initial feature maps again. If the terminal determines that the number of the updated initial feature maps meets the preset single condition, that is, only one initial feature map exists after updating, the terminal may use the initial feature map meeting the preset single condition as the first type feature map.
In one possible implementation, the plurality of different preset scales may include a first scale, a second scale, and a third scale, the first scale being smaller than the second scale, and the second scale being smaller than the third scale.
Specifically, the terminal may perform convolution calculation of the first scale on the image to be processed and determine an initial feature map of a first dimension of the image to be processed; perform convolution calculation of the second scale on the image to be processed and determine an initial feature map of a second dimension of the image to be processed; and perform convolution calculation of the third scale on the image to be processed and determine an initial feature map of a third dimension of the image to be processed.
The terminal thus determines that the number of the initial feature maps is odd, and then performs fusion splicing processing on the plurality of initial feature maps. The specific fusion process may be as follows: the terminal sequentially performs fusion splicing processing (channel cascade) on the ith initial feature map and the (i + 1)th initial feature map according to the arrangement order to obtain at least one initial feature map, where i is greater than 0 and smaller than the number of preset scales. For example, the initial feature map of the first dimension and the initial feature map of the second dimension are fused and spliced to obtain an initial feature map of a fourth dimension, and the initial feature map of the second dimension and the initial feature map of the third dimension are fused and spliced to obtain an initial feature map of a fifth dimension. The terminal thus determines that the updated initial feature maps of the image to be processed include the initial feature map of the fourth dimension and the initial feature map of the fifth dimension.
Thus, the terminal determines that the number of the initial feature maps of the fourth dimension and the number of the initial feature maps of the fifth dimension are 2, and the number does not meet the preset single condition, at this time, step 204 is executed again, and the fusion splicing processing is performed on the initial feature maps of the fourth dimension and the initial feature maps of the fifth dimension. Thus, the terminal determines that the updated initial feature map of the image to be processed includes: initial feature map of sixth dimension. At this time, if the number of the updated initial feature maps meets a preset single condition, the terminal may use the initial feature map of the sixth dimension as the first type feature map.
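The iterative fusion just described can be sketched as follows, assuming a PyTorch implementation; the sketch follows the odd-count branch (sequentially fusing adjacent maps) shown in the example above, and the plain channel concatenation without additional convolutions is an assumption.

```python
import torch

# Hedged sketch of repeating the fusion splicing step until the preset single
# condition (only one feature map remains) is met.
def fuse_until_single(feature_maps):
    while len(feature_maps) > 1:
        feature_maps = [torch.cat([feature_maps[i], feature_maps[i + 1]], dim=1)
                        for i in range(len(feature_maps) - 1)]
    return feature_maps[0]   # first type feature map
```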
Alternatively, as shown in fig. 3, a first scale may represent the convolution calculation of a 1 × 1 convolution kernel, a second scale may represent the convolution calculation of a 3 × 3 convolution kernel, and a third scale may represent the convolution calculation of a 7 × 7 convolution kernel. The terminal can respectively carry out convolution calculation on the image to be processed through a 1 × 1 convolution kernel and a 3 × 3 convolution kernel so as to extract the detail characteristics of the image to be processed; in this way, the terminal can simultaneously perform convolution calculation on the image to be processed through the 7 × 7 convolution kernel, and extract the overall features of the image to be processed. When the terminal performs the convolution calculation through the convolution kernel, in order to ensure the integrity of the features of the image to be processed, the step size of the convolution calculation may be set to 1.
Thus, structurally, the terminal can fuse the initial feature maps obtained by the 1 × 1 convolution and the 3 × 3 convolution to obtain a fused feature and increase the number of features, and fuse the features obtained by the 3 × 3 convolution and the 7 × 7 convolution to obtain another fused feature; finally, the two fused features are combined to obtain the first type feature map.
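A minimal sketch of this multi-scale extraction is given below, assuming a PyTorch implementation; the input and intermediate channel counts are illustrative assumptions, while the kernel sizes, the stride of 1 and the pairwise channel concatenation follow the description above.

```python
import torch
import torch.nn as nn

# Hedged sketch of the multi-scale feature extraction: 1x1 and 3x3 convolutions
# extract detail (local) features, the 7x7 convolution extracts overall (global)
# features, and the branches are fused pairwise by channel concatenation.
class MultiScaleExtractor(nn.Module):
    def __init__(self, in_channels: int = 3, mid_channels: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, mid_channels, kernel_size=1, stride=1)
        self.conv3 = nn.Conv2d(in_channels, mid_channels, kernel_size=3, stride=1, padding=1)
        self.conv7 = nn.Conv2d(in_channels, mid_channels, kernel_size=7, stride=1, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1, f3, f7 = self.conv1(x), self.conv3(x), self.conv7(x)
        fused_13 = torch.cat([f1, f3], dim=1)   # fuse 1x1 and 3x3 features
        fused_37 = torch.cat([f3, f7], dim=1)   # fuse 3x3 and 7x7 features
        return torch.cat([fused_13, fused_37], dim=1)   # first type feature map
```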
In the embodiment, by carrying out multi-scale feature extraction on the image to be processed, local and global features can be taken into consideration, and the extraction of details and overall information of the image is facilitated.
In one embodiment, as shown in fig. 4, the specific processing procedure of step 106, "processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map", includes:
and 302, performing residual error calculation on the first type characteristic diagram to obtain target residual error characteristics.
Specifically, the terminal may perform residual calculation on the first type feature map through a preset residual algorithm to obtain a target residual feature corresponding to the first type feature map.
And 304, enhancing the target residual error characteristics according to a preset mixed attention algorithm to obtain an enhanced characteristic diagram.
In this embodiment, performing residual calculation on the first type feature map allows deeper features of the image to be processed to be obtained without increasing the consumption of computing resources; enhancing the target residual features increases the model's ability to learn the main features of the image and avoids the problems of image blurring and image aliasing.
In one embodiment, the cloud removal model includes a residual calculation module including a fusion unit and a plurality of residual calculation units.
Correspondingly, the specific processing procedure of "performing residual calculation on the first type feature map to obtain the target residual feature" in step 302 includes:
and performing residual calculation on the first type characteristic diagram through a fusion unit and a plurality of residual calculation units included in the residual calculation module to obtain the target residual characteristic. The input end and the output end of each residual error calculation unit are sequentially connected, the output end of the first residual error calculation unit is fused with the output end of a target residual error calculation unit in the plurality of residual error calculation units and then connected with the input end of the next residual error calculation unit of the target residual error calculation unit, and the output end of the first residual error calculation unit is fused with the output end of the last residual error calculation unit and then connected with the input end of the fusion unit.
In one example, as shown in fig. 5, seven residual calculation units may be included in the residual calculation module, an input end and an output end of each residual calculation unit are sequentially connected, and a fourth residual calculation unit may be a target residual calculation unit. In this way, the output end of the first residual calculation unit is fused with the output end of the fourth residual calculation unit and then connected with the input end of the fifth residual calculation unit, and the output end of the first residual calculation unit is fused with the output end of the seventh residual calculation unit and then connected with the input end of the fusion unit.
Specifically, the terminal inputs the first type feature map into the residual calculation module, where the first type feature map is processed sequentially by the residual calculation units and then by the fusion unit to obtain the target residual features.
In a possible implementation manner, for each residual calculation unit, the terminal performs residual calculation on the input data of each residual calculation unit, where the residual calculation includes convolution calculation of a preset scale and activation operation performed according to a preset activation function.
Alternatively, the convolution calculation of the preset scale may be a convolution calculation of a 3 × 3 convolution kernel, and the preset activation function may be a ReLU function.
In another possible implementation manner, as shown in fig. 6, the residual calculation unit includes a plurality of residual calculation sub-units and a fusion sub-unit. The output end of each residual calculation sub-unit is connected in sequence to the input end of the next residual calculation sub-unit, and the output end of the last residual calculation sub-unit is fused with the input of the first residual calculation sub-unit and then connected to the input end of the fusion sub-unit. The calculation performed in each residual calculation sub-unit includes convolution calculation of a preset scale and an activation operation performed according to a preset activation function.
Alternatively, the residual calculation unit may include 3 residual calculation sub-units, the convolution calculation of the preset scale may be a convolution calculation of a 3 × 3 convolution kernel, and the preset activation function may be a ReLU function.
The first residual calculation unit is taken as an example: the terminal inputs the first type feature map into the first residual calculation unit. Of the 3 residual calculation sub-units, the terminal performs convolution calculation of a preset scale on the first type feature map through the first residual calculation sub-unit to obtain a first convolution calculation result, and performs an activation operation on the convolution calculation result to obtain a first activation result; the terminal then performs convolution calculation of a preset scale on the first activation result through the second residual calculation sub-unit to obtain a second convolution calculation result, and performs an activation operation on the second convolution calculation result to obtain a second activation result; the terminal then performs convolution calculation of a preset scale on the second activation result through the third residual calculation sub-unit to obtain a third convolution calculation result, and performs an activation operation on the third convolution calculation result to obtain a third activation result. Finally, the terminal performs fusion splicing processing on the third activation result and the first type feature map through the fusion sub-unit to obtain the output data of the first residual calculation unit.
In this way, the terminal can input the output data of the first residual calculation unit to the second residual calculation unit; the terminal fuses the output data of the first residual calculation unit with the output data of the fourth residual calculation unit and inputs the result to the fifth residual calculation unit; and the terminal fuses the output data of the first residual calculation unit with the output data of the last residual calculation unit and inputs the result into the fusion unit.
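The wiring described above can be sketched as follows, assuming a PyTorch implementation; element-wise addition is assumed for the fusion of unit outputs (a channel concatenation followed by a 1 × 1 convolution would be an equally valid reading), and the channel count and the 1 × 1 convolution used as the fusion unit are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of the residual calculation module: seven residual units in
# sequence, the first unit's output fused with the fourth unit's output before
# the fifth unit, and fused with the seventh unit's output before the fusion unit.
class ResidualUnit(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Three sub-units of 3x3 convolution + ReLU, with a skip from the unit input.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

class ResidualCalculationModule(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.units = nn.ModuleList([ResidualUnit(channels) for _ in range(7)])
        self.fusion_unit = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, first_type_feature_map: torch.Tensor) -> torch.Tensor:
        out = self.units[0](first_type_feature_map)
        first_out = out
        for unit in self.units[1:4]:             # second, third and fourth units
            out = unit(out)
        out = self.units[4](out + first_out)     # fuse first and fourth unit outputs
        out = self.units[5](out)
        out = self.units[6](out)
        return self.fusion_unit(out + first_out) # fuse first and seventh unit outputs
```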
In one embodiment, the preset mixed attention algorithm includes a preset pooling algorithm and a preset channel algorithm. Correspondingly, as shown in fig. 7, the specific processing procedure of step 304, "performing enhancement processing on the target residual features according to a preset mixed attention algorithm to obtain an enhanced feature map", includes:
step 502, performing pooling calculation on the target residual error characteristics according to a preset pooling algorithm to obtain initial spatial characteristics.
Specifically, the preset pooling algorithm includes a maximum pooling algorithm and an average pooling algorithm. The terminal performs maximum pooling calculation and average pooling calculation on the target residual features through the maximum pooling algorithm and the average pooling algorithm respectively to obtain a maximum pooling feature and an average pooling feature. The terminal can then perform convolution calculation and fusion processing on the maximum pooling feature and the average pooling feature to obtain the initial spatial features.
And step 504, performing channel calculation on the initial spatial features according to a preset channel algorithm to obtain initial channel features.
Specifically, the preset channel algorithm includes a maximum channel algorithm and an average channel algorithm. The terminal performs maximum channel calculation and average channel calculation on the initial spatial features through the maximum channel algorithm and the average channel algorithm respectively, to obtain a maximum channel feature and an average channel feature. The terminal can then perform fusion processing and convolution calculation on the maximum channel feature and the average channel feature to obtain the initial channel feature.
Step 506, determining an enhanced feature map according to the initial spatial features and the initial channel features.
Specifically, the terminal performs product operation on the initial spatial feature and the initial channel feature to obtain an enhanced feature map corresponding to the image to be processed.
In one example, the preset hybrid attention algorithm includes a preset pooling algorithm and a preset channel algorithm. Correspondingly, the specific processing procedure of step 304 "performing enhancement processing on the target residual features according to a preset mixed attention algorithm to obtain an enhanced feature map" includes:
A maximum pooling calculation and a convolution calculation are performed on the target residual feature according to a preset maximum pooling algorithm to obtain a first spatial feature; an average pooling calculation and a convolution calculation are performed on the target residual feature according to a preset average pooling algorithm to obtain a second spatial feature; and the target residual feature, the first spatial feature and the second spatial feature are processed to obtain an initial spatial feature.
A maximum channel calculation is performed on the initial spatial feature according to a preset maximum channel algorithm to obtain a first channel feature; an average channel calculation is performed on the initial spatial feature according to a preset average channel algorithm to obtain a second channel feature; and the first channel feature and the second channel feature are processed to obtain an initial channel feature.
And determining an enhanced feature map according to the initial spatial features and the initial channel features.
In a specific example, as shown in fig. 8, a specific processing procedure of "performing pooling calculation on the target residual features according to a preset pooling algorithm to obtain initial spatial features" in step 502 includes:
specifically, the preset pooling algorithm includes a maximum pooling algorithm and an average pooling algorithm. And the terminal performs maximum pooling calculation and average pooling calculation on the target residual error characteristics through a maximum pooling algorithm and an average pooling algorithm respectively to obtain maximum pooling characteristics and average pooling characteristics.
The terminal then applies two layers of 1 × 1 convolution to the maximum pooling feature and the average pooling feature respectively. Specifically, the terminal performs a 1 × 1 convolution on the maximum pooling feature to obtain a first pooling convolution result, and performs a 1 × 1 convolution on the first pooling convolution result to obtain a second pooling convolution result; similarly, the terminal performs a 1 × 1 convolution on the average pooling feature to obtain a third pooling convolution result, and performs a 1 × 1 convolution on the third pooling convolution result to obtain a fourth pooling convolution result. The terminal performs fusion splicing processing on the second pooling convolution result and the fourth pooling convolution result to obtain a fifth pooling convolution result, and performs a product operation on the fifth pooling convolution result and the target residual feature to obtain the initial spatial feature.
Step 504, "according to the preset channel algorithm, channel calculation is performed on the initial spatial feature to obtain the initial channel feature" may be performed in the following specific processing procedures: the preset channel algorithm includes a maximum channel algorithm and an average channel algorithm. And the terminal respectively performs maximum channel calculation and average channel calculation on the initial spatial features through a maximum channel algorithm and an average channel algorithm to obtain maximum channel features and average channel features.
The terminal then performs fusion splicing processing on the maximum channel feature and the average channel feature to obtain a channel fusion result, performs a 1 × 1 convolution on the channel fusion result to obtain a channel convolution result, and takes the channel convolution result as the initial channel feature.
And the terminal performs product operation on the initial spatial feature and the initial channel feature to obtain an enhanced feature map corresponding to the image to be processed.
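For illustration, a minimal PyTorch sketch of the mixed attention step is given below. It assumes that the pooling branch uses global max/average pooling followed by two 1 × 1 convolutions per branch, that the two branch results are fused additively with a sigmoid gate, and that the channel branch uses per-pixel max/mean statistics over channels followed by a 1 × 1 convolution; these interpretations, and the channel-reduction ratio, are assumptions rather than details fixed by this description.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Sketch of the mixed attention module: a pooling (spatial) branch and a
    channel branch, applied multiplicatively to the target residual feature."""
    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        mid = max(channels // reduction, 1)
        def branch():
            # Two 1x1 convolutions per pooled branch (reduction ratio assumed).
            return nn.Sequential(nn.Conv2d(channels, mid, 1), nn.ReLU(inplace=True),
                                 nn.Conv2d(mid, channels, 1))
        self.max_branch = branch()
        self.avg_branch = branch()
        self.channel_conv = nn.Conv2d(2, 1, 1)  # 1x1 conv on spliced channel statistics

    def forward(self, feat):
        # Pooling branch -> initial spatial feature
        max_pool = self.max_branch(torch.amax(feat, dim=(2, 3), keepdim=True))
        avg_pool = self.avg_branch(torch.mean(feat, dim=(2, 3), keepdim=True))
        pooled = torch.sigmoid(max_pool + avg_pool)   # fusion of the two results (assumed additive)
        spatial_feat = feat * pooled                  # product with the target residual feature
        # Channel branch -> initial channel feature
        ch_max = torch.amax(spatial_feat, dim=1, keepdim=True)
        ch_avg = torch.mean(spatial_feat, dim=1, keepdim=True)
        channel_feat = torch.sigmoid(self.channel_conv(torch.cat([ch_max, ch_avg], dim=1)))
        # Enhanced feature map = product of the two branch outputs
        return spatial_feat * channel_feat
```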
In one embodiment, the specific implementation process of step 108 "performing dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual", includes:
The preset dense residual algorithm is computed through a dense residual module. The dense residual module includes a plurality of dense residual calculation units connected in series, where the output end of each dense residual calculation unit is connected in sequence to the input end of the next one; that is, the terminal outputs the calculation result of the i-th dense residual calculation unit to the next dense residual calculation unit connected to it.
Optionally, the dense residual module may include three dense residual calculation units, an output end of the first dense residual calculation unit is connected to an input end of the second dense residual calculation unit, an output end of the second dense residual calculation unit is connected to an input end of the third dense residual calculation unit, an input of the first dense residual calculation unit is a second type feature map, the second type feature map may be obtained by performing convolution calculation of 3 × 3 on the image to be processed, and an output of the third dense residual calculation unit is a dense residual.
In a specific example, the dense residual calculation unit includes a dense residual fusion subunit and a plurality of dense residual calculation subunits, for example, as shown in fig. 9, which is a schematic structural diagram of a dense residual calculation unit that includes four dense residual calculation subunits and a dense residual fusion subunit.
Taking the first dense residual calculation unit as an example, its input is the second type feature map, and the terminal inputs the second type feature map into each dense residual calculation subunit. The terminal performs a 3 × 3 convolution calculation on the second type feature map through the first dense residual calculation subunit to obtain a first dense residual calculation result, and inputs this result into the second, third and fourth dense residual calculation subunits. Through the second dense residual calculation subunit, the terminal fuses the first dense residual calculation result with the second type feature map and then performs a 3 × 3 convolution calculation to obtain a second dense residual calculation result, which is input into the third and fourth dense residual calculation subunits. Through the third dense residual calculation subunit, the terminal fuses the first and second dense residual calculation results with the second type feature map and then performs a 3 × 3 convolution calculation to obtain a third dense residual calculation result. Through the fourth dense residual calculation subunit, the terminal fuses the first, second and third dense residual calculation results with the second type feature map and then performs a 3 × 3 convolution calculation to obtain a fourth dense residual calculation result. Finally, the terminal performs fusion splicing on the fourth dense residual calculation result and the second type feature map to obtain the output of the first dense residual calculation unit.
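For illustration, a minimal PyTorch sketch of one dense residual calculation unit is given below; the ReLU activations, the growth channel width and the trailing 1 × 1 convolution that restores the channel count after the final splice are assumptions made so that the units can be chained in series.

```python
import torch
import torch.nn as nn

class DenseResidualUnit(nn.Module):
    """Sketch of one dense residual calculation unit with four densely
    connected 3x3 convolution sub-units and a final splice with the input."""
    def __init__(self, channels: int = 64, growth: int = 32):
        super().__init__()
        self.subunits = nn.ModuleList()
        in_ch = channels
        for _ in range(4):
            self.subunits.append(
                nn.Sequential(nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(inplace=True))
            )
            in_ch += growth  # each sub-unit also receives all earlier results

    # Assumed 1x1 fusion to keep the unit's output channels equal to its input.
        self.fuse = nn.Conv2d(channels + growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for subunit in self.subunits:
            feats.append(subunit(torch.cat(feats, dim=1)))  # dense connections
        out = torch.cat([feats[-1], x], dim=1)              # splice with the unit input
        return self.fuse(out)
```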
In an embodiment, as shown in fig. 10, a specific processing procedure of "performing image reconstruction according to the first fusion feature to obtain a target cloud-free image corresponding to the image to be processed" in step 112 includes:
Step 402, performing upsampling processing on the first fusion feature a target number of times to obtain recovered pixel data.
Specifically, the target number of times may be 2 times, and the upsampling process may be performed by an upsampling module.
Step 404, performing second-type convolution calculation on the recovered pixel data to obtain a target cloud-free image corresponding to the image to be processed.
In this embodiment, image reconstruction and pixel recovery are realized through two rounds of upsampling, which ensures the sharpness of the recovered image.
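For illustration, a minimal PyTorch sketch of the reconstruction step is given below, assuming PixelShuffle-based upsampling applied twice followed by a 3 × 3 second-type convolution that maps the recovered pixels to the output bands; the scale factors and the band count are assumptions.

```python
import torch.nn as nn

class ReconstructionHead(nn.Module):
    """Sketch of the reconstruction step: two upsampling rounds, then a 3x3
    convolution producing the target cloud-free image."""
    def __init__(self, channels: int = 64, out_bands: int = 3):
        super().__init__()
        def up_block():
            # Sub-pixel (PixelShuffle) upsampling is an assumed implementation choice.
            return nn.Sequential(nn.Conv2d(channels, channels * 4, 3, padding=1),
                                 nn.PixelShuffle(2), nn.ReLU(inplace=True))
        self.upsample = nn.Sequential(up_block(), up_block())  # target number of times = 2
        self.to_image = nn.Conv2d(channels, out_bands, 3, padding=1)

    def forward(self, fused_feature):
        return self.to_image(self.upsample(fused_feature))
```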
In one embodiment, as shown in fig. 11, the cloud removing method further includes:
step 602, training data is obtained.
Wherein the training data comprises a plurality of sets of image pairs, the image pairs comprising a sample cloudy image and a sample non-cloudy image.
Specifically, the terminal collects a plurality of cloudy hyperspectral remote sensing images and cloud-free hyperspectral remote sensing images of the same scene, and performs random rotation processing and cropping processing on these images to achieve data expansion.
Optionally, the terminal determines a random rotation angle, rotates a cloudy hyperspectral remote sensing image by the random rotation angle to obtain a rotated image, and takes the rotated image as a sample cloud image; correspondingly, similar processing is performed on the cloud-free hyperspectral remote sensing image to obtain a sample cloud-free image.
Optionally, the terminal cuts the plurality of cloudy hyperspectral remote sensing images and cloud-free hyperspectral remote sensing images into sub-pictures of the same size, which serve as the sample cloud images and the sample cloud-free images.
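For illustration, a minimal sketch of this data-expansion step is given below, applying the same random rotation and crop to a cloudy/cloud-free image pair; the angle range, patch size and PIL-based implementation are assumptions.

```python
import random
from PIL import Image

def augment_pair(cloudy: Image.Image, clear: Image.Image, patch: int = 256):
    """Apply the same random rotation and crop to a paired cloudy and
    cloud-free image, returning one augmented sample pair."""
    angle = random.uniform(0.0, 360.0)                       # random rotation angle (assumed range)
    cloudy = cloudy.rotate(angle, expand=True)
    clear = clear.rotate(angle, expand=True)
    left = random.randint(0, cloudy.width - patch)           # identical crop window for both images
    top = random.randint(0, cloudy.height - patch)
    box = (left, top, left + patch, top + patch)
    return cloudy.crop(box), clear.crop(box)                 # sample cloud image, sample cloud-free image
```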
And step 604, inputting the cloud images of the samples into a cloud removal model to be trained to obtain a predicted cloud-free image.
Step 606, calculating a target loss value through a loss function according to the sample cloud-free image and the predicted cloud-free image.
Step 608, updating the network parameters of the cloud removal model to be trained according to the target loss value, and returning to the step of acquiring training data until the target loss value meets the training completion condition, so as to obtain the trained cloud removal model.
Specifically, the training completion condition may be that the loss function corresponding to the target loss value has converged, or that the number of iterations of the training data has reached the target number, or the like. For example, the target number may be 100 times, 300 times, and the like, and the target number is not particularly limited in the embodiment of the present invention.
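For illustration, a minimal PyTorch sketch of the training procedure in steps 602 to 608 is given below; the Adam optimizer, learning rate and epoch count are assumptions, and loss_fn stands for the target loss function described in the following embodiments.

```python
import torch

def train_cloud_removal(model, loss_fn, loader, epochs: int = 300, lr: float = 1e-4, device: str = "cuda"):
    """Iterate over cloudy/cloud-free pairs, predict a cloud-free image,
    compute the target loss and update the network parameters."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):                      # target number of iterations, e.g. 100 or 300
        for cloudy, clear in loader:                 # sample cloud image, sample cloud-free image
            cloudy, clear = cloudy.to(device), clear.to(device)
            predicted = model(cloudy)                # predicted cloud-free image
            loss = loss_fn(predicted, clear)         # target loss value
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```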
In an example, fig. 12 shows a training structure diagram of the cloud removal model. The terminal inputs a sample cloud image X into the cloud removal model G (the generation network) to obtain a predicted cloud-free image G(X). The terminal may then input the predicted cloud-free image G(X) and the sample cloud-free image X' into the discrimination network D; the discrimination network D makes a judgment according to the predicted cloud-free image G(X) and the cloud-free image X', and outputs a judgment result for the predicted cloud-free image G(X), where the judgment result may include TRUE and FALSE.
In one example, as shown in fig. 13, the terminal may input the sample cloud image X into the cloud removal model and detect the largest area S covered by cloud, so that the cloud-covered area is partially restored by the model G while the cloud-free area keeps the original image, denoted G(S). The terminal can then obtain the predicted image X' (the predicted cloud-free image) by splicing the two pictures together.
Specifically, the terminal first cuts the remote sensing image obtained in the actual scene to a size suitable for the network and inputs the cut cloud image into the network. The trained network can automatically identify the cloud-covered area and select the cloud area with candidate boxes. Through the generation network, the cloud areas within the candidate boxes are automatically filled in according to the knowledge learned by the generation network, while the cloud-free areas are kept in their original state. Finally, a clear image without cloud cover is output, which can support post-loan risk assessment in banking business, for example assessing the property condition of a borrower.
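For illustration, a minimal sketch of the final splicing step is given below, assuming a binary cloud mask produced by the detection stage; pixels inside the mask are taken from the generated content and pixels outside it from the original image.

```python
import torch

def splice_prediction(cloudy_image: torch.Tensor, generated: torch.Tensor, cloud_mask: torch.Tensor) -> torch.Tensor:
    """Keep the original pixels outside the detected cloud region and use the
    generated content inside it. cloud_mask is assumed binary, 1 = cloud-covered."""
    mask = cloud_mask.to(cloudy_image.dtype)
    return mask * generated + (1.0 - mask) * cloudy_image
```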
In one embodiment, as shown in fig. 14, the specific implementation of step 606 "calculating a target loss value by a loss function according to a sample cloud-free image and a predicted cloud-free image" includes:
step 702, calculating a forward cloud removal loss value, a reverse cloud loss value, a cycle consistency loss value and a perception loss value according to the sample cloud-free image and the predicted cloud-free image respectively through a forward cloud removal loss function, a reverse cloud loss function, a cycle consistency loss function and a perception loss function.
Step 704, superposing the forward cloud removal loss value, the reverse cloud loss value, the cycle consistency loss value and the perception loss value to obtain the target loss value.
Specifically, the terminal may calculate the target loss value L(G, F) by the following formula:

L(G, F) = L_G + L_F + λ_cyc·L_cyc + λ_p·L_p

wherein G(·) represents the forward cloud removal generator, which generates the predicted cloud-free image corresponding to the sample cloud image; F(·) represents the reverse clouding generator; L_G represents the forward cloud removal loss value; L_F represents the reverse clouding loss value; λ_cyc represents the weight corresponding to the cycle consistency loss value; L_cyc represents the cycle consistency loss value; λ_p represents the weight of the perceptual loss value; and L_p represents the perceptual loss value.
The terminal can calculate the forward cloud removal loss value L_G by the following formula:

L_G = E_(y~P_data(y))[log D_Y(y)] + E_(x~P_data(x))[log(1 − D_Y(G(x)))]

wherein x represents the sample cloud image, y represents the sample cloud-free image, D_Y(·) represents the discrimination result for a cloud-free image, log(·) represents a logarithmic operation that makes the discrimination result more prominent, and P_data represents the data distribution.
The terminal can calculate the reverse clouding loss value L_F by the following formula:

L_F = E_(x~P_data(x))[log D_X(x)] + E_(y~P_data(y))[log(1 − D_X(F(y)))]

wherein D_X(·) represents the discrimination result for a cloud image, and F(·) represents the generation of a cloud image.
The terminal can calculate the cycle consistency loss value L_cyc by the following formula:

L_cyc = E_(x~P_data(x))[‖F(G(x)) − x‖_1] + E_(y~P_data(y))[‖G(F(y)) − y‖_1]

wherein ‖·‖_1 represents the 1-norm, i.e., the sum of the absolute values of the vector elements.
The terminal can calculate the perceptual loss value L_p by the following formula:

L_p = Σ_i [1/(C_i·H_i·W_i)]·(L_p^G + L_p^F)

wherein C_i is the length weight, H_i is the height weight, W_i is the channel weight, L_p^G represents the style loss of the cloud-removed image, and L_p^F represents the style loss of the cloud-added image.
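For reference, a minimal sketch of how the combined objective could be assembled is given below, assuming the adversarial and perceptual components have already been computed; the cycle-consistency helper and the weight values λ_cyc = 10 and λ_p = 1 are illustrative assumptions, not values fixed by this application.

```python
import torch.nn.functional as F

def cycle_consistency_loss(x, y, G, F_gen):
    """1-norm cycle consistency: F(G(x)) should reproduce the cloud image x
    and G(F(y)) should reproduce the cloud-free image y."""
    return F.l1_loss(F_gen(G(x)), x) + F.l1_loss(G(F_gen(y)), y)

def target_loss(l_g, l_f, l_cyc, l_p, lambda_cyc: float = 10.0, lambda_p: float = 1.0):
    """Weighted combination L(G, F) = L_G + L_F + lambda_cyc * L_cyc + lambda_p * L_p.
    The weight values are illustrative assumptions."""
    return l_g + l_f + lambda_cyc * l_cyc + lambda_p * l_p
```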
In one embodiment, inputting the cloud image of the sample to a cloud removal model to be trained to obtain a predicted cloud-free image includes:
and respectively carrying out first type convolution calculation and second type convolution calculation on the cloud image of the sample to obtain a first type sample characteristic diagram and a second type sample characteristic diagram.
And performing residual error calculation on the first type of sample characteristic diagram to obtain a sample target residual error characteristic.
And carrying out weighting processing on the sample target residual error characteristics according to a preset mixed attention algorithm to obtain a sample enhanced characteristic diagram.
And carrying out dense residual calculation on the second type sample characteristic graph according to a preset dense residual algorithm to obtain a sample dense residual.
And carrying out fusion processing on the sample enhanced feature map and the sample dense residual error to obtain a first sample fusion feature.
And performing target times of upsampling processing on the first sample fusion characteristic to obtain sample recovery pixel data.
And performing second-type convolution calculation on the sample recovery pixel data to obtain a predicted non-cloud image corresponding to the sample cloud image.
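For illustration, a minimal PyTorch sketch of this forward pass is given below, with each stage reduced to a stand-in layer so the data flow is visible; the real first-type convolution, residual/attention and dense residual modules are the more elaborate structures described above, and the layer widths here are assumptions.

```python
import torch
import torch.nn as nn

class CloudRemovalGenerator(nn.Module):
    """Sketch of the generator forward pass: two convolution branches, a
    residual/attention branch and a dense residual branch, fused and
    reconstructed into the predicted cloud-free image."""
    def __init__(self, bands: int = 3, channels: int = 64):
        super().__init__()
        self.first_type_conv = nn.Conv2d(bands, channels, 3, padding=1)   # stand-in for the multi-scale branch
        self.second_type_conv = nn.Conv2d(bands, channels, 3, padding=1)
        self.residual_attention = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.dense_residual = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.reconstruct = nn.Conv2d(channels, bands, 3, padding=1)

    def forward(self, cloudy):
        enhanced = self.residual_attention(self.first_type_conv(cloudy))  # sample enhanced feature map
        dense = self.dense_residual(self.second_type_conv(cloudy))        # sample dense residual
        fused = self.fuse(torch.cat([enhanced, dense], dim=1))            # first sample fusion feature
        return self.reconstruct(fused)                                    # predicted cloud-free image
```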
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least a part of the steps in the flowcharts related to the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; the order of performing these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a cloud removing device for realizing the cloud removing method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the cloud removal device provided below may refer to the above limitations on the cloud removal method, and details are not described here.
In one embodiment, as shown in fig. 15, there is provided a cloud removing apparatus 800 including:
the acquisition module 801 is used for acquiring an image to be processed, wherein the image to be processed is a remote sensing image with cloud cover.
The convolution calculation module 802 is configured to perform first-type convolution calculation and second-type convolution calculation on the image to be processed, respectively, to obtain a first-type feature map and a second-type feature map.
The enhancing module 803 is configured to process the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map;
and the dense residual calculation module 804 is configured to perform dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual.
And a fusion module 805 configured to perform fusion processing on the enhanced feature map and the dense residual to obtain a first fusion feature.
And a reconstructing module 806, configured to perform image reconstruction according to the first fusion feature to obtain a target cloud-free image corresponding to the image to be processed.
In one embodiment, the acquisition module is specifically configured to:
acquiring an initial image, identifying a cloud layer area in the initial image, obtaining a first cloud area image and a first non-cloud area image corresponding to the initial image, and taking the first cloud area image as the image to be processed;
the device further comprises:
and the superposition module is used for superposing the first cloud-free area image and the target cloud-free image to obtain a cloud-free image corresponding to the initial image.
In one embodiment, the convolution calculation module is specifically configured to:
performing convolution calculation on the image to be processed respectively according to a plurality of different preset scales to obtain a plurality of initial feature maps with different dimensionalities corresponding to the image to be processed, wherein the dimensionalities correspond to the preset scales one by one;
performing fusion splicing processing on every two initial feature maps to obtain an updated initial feature map of the image to be processed;
and under the condition that the updated initial feature map does not meet the preset single condition, re-executing the step of performing fusion splicing processing on every two initial feature maps to obtain the updated initial feature map of the image to be processed until the updated initial feature map meets the preset single condition, and taking the initial feature map meeting the preset single condition as the first type feature map.
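For illustration, a minimal PyTorch sketch of this multi-scale first-type convolution is given below; the kernel sizes used as preset scales and the 1 × 1 reduction applied after each fusion splice are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleFirstTypeConv(nn.Module):
    """Sketch of the first-type convolution: convolve the input at several
    preset scales, then repeatedly splice pairs of feature maps until a single
    first-type feature map remains."""
    def __init__(self, bands: int = 3, channels: int = 64, scales=(1, 3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(bands, channels, k, padding=k // 2) for k in scales
        )
        # Assumed 1x1 reduction after each pairwise splice to keep channels fixed.
        self.reduce = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, image):
        maps = [branch(image) for branch in self.branches]
        while len(maps) > 1:                              # fuse-splice two feature maps at a time
            merged = self.reduce(torch.cat(maps[:2], dim=1))
            maps = [merged] + maps[2:]
        return maps[0]                                    # first-type feature map
```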
In one embodiment, the enhancement module is specifically configured to:
performing residual error calculation on the first type characteristic diagram to obtain target residual error characteristics;
and according to a preset mixed attention algorithm, performing enhancement processing on the target residual error characteristics to obtain an enhanced characteristic diagram.
In one embodiment, the reconstruction module is specifically configured to:
performing target times of upsampling processing on the first fusion characteristic to obtain recovered pixel data;
and performing the second type convolution calculation on the recovered pixel data to obtain a target cloud-free image corresponding to the image to be processed.
In one embodiment, the cloud removal model comprises a residual calculation module comprising a fusion unit and a plurality of residual calculation units;
the enhancement module is specifically configured to:
and performing residual calculation on the first type characteristic diagram through the fusion unit and a plurality of residual calculation units included in the residual calculation module to obtain target residual characteristics, wherein the input end and the output end of each residual calculation unit are sequentially connected, the output end of the first residual calculation unit is fused with the output end of a target residual calculation unit in the plurality of residual calculation units and then connected with the input end of the next residual calculation unit in the target residual calculation unit, and the output end of the first residual calculation unit is fused with the output end of the last residual calculation unit and then connected with the input end of the fusion unit.
In one embodiment, the preset mixed attention algorithm comprises a preset pooling algorithm and a preset channel algorithm; the enhancement module is specifically configured to:
performing pooling calculation on the target residual error according to a preset pooling algorithm to obtain an initial spatial feature;
according to a preset channel algorithm, channel calculation is carried out on the target residual error to obtain initial channel characteristics;
and determining an enhanced feature map according to the initial spatial feature and the initial channel feature.
In one embodiment, the apparatus further comprises: the training module is used for acquiring training data, wherein the training data comprises a plurality of groups of image pairs, and the image pairs comprise a sample cloud image and a sample cloud-free image; inputting the cloud image of the sample to a cloud removal model to be trained to obtain a predicted cloud-free image; calculating a target loss value through a loss function according to the sample cloud-free image and the predicted cloud-free image; and updating the network parameters of the cloud removal model to be trained according to the target loss value, and returning to the step of executing the training data acquisition until the target loss value meets the training completion condition to obtain the trained cloud removal model.
In one embodiment, the training module is specifically configured to:
calculating a forward cloud removal loss value, a reverse cloud loss value, a cycle consistency loss value and a perception loss value according to the sample cloud-free image and the predicted cloud-free image respectively through a forward cloud removal loss function, a reverse cloud loss function, a cycle consistency loss function and a perception loss function;
and superposing the forward cloud removal loss value, the reverse cloud loss value, the cycle consistency loss value and the perception loss value to obtain a target loss value.
The modules in the cloud removing apparatus 800 may be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 16. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing cloud removal related data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a cloud removal method.
Those skilled in the art will appreciate that the architecture shown in fig. 16 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It should be noted that the method and apparatus in the embodiments of the present disclosure may be used in the technical field of artificial intelligence, and may be used in the field of financial technology or other related fields.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), for example. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application should be subject to the appended claims.

Claims (13)

1. A cloud removal method, the method comprising:
acquiring an image to be processed, wherein the image to be processed is a remote sensing image with cloud cover;
performing first-type convolution calculation on the image to be processed based on the pre-trained cloud removal model to obtain a first-type feature map, and performing second-type convolution calculation on the image to be processed to obtain a second-type feature map;
processing the first type feature map according to a preset mixed attention algorithm to obtain an enhanced feature map, and performing dense residual calculation on the second type feature map according to a preset dense residual algorithm to obtain a dense residual;
performing fusion processing on the enhanced feature map and the dense residual error to obtain a first fusion feature;
and reconstructing an image according to the first fusion characteristic to obtain a target cloud-free image corresponding to the image to be processed.
2. The method of claim 1, wherein the acquiring the image to be processed comprises:
acquiring an initial image, identifying a cloud layer area in the initial image, obtaining a first cloud area image and a first non-cloud area image corresponding to the initial image, and taking the first cloud area image as the image to be processed;
the method further comprises the following steps:
and superposing the first cloud-free area image and the target cloud-free image to obtain a cloud-free image corresponding to the initial image.
3. The method according to claim 1, wherein said performing a first type convolution calculation on the image to be processed to obtain a first type feature map comprises:
performing convolution calculation on the image to be processed respectively according to a plurality of different preset scales to obtain a plurality of initial feature maps with different dimensionalities corresponding to the image to be processed, wherein the dimensionalities correspond to the preset scales one by one;
performing fusion splicing processing on the two initial feature maps to obtain an updated initial feature map of the image to be processed;
and under the condition that the updated initial feature map does not meet the preset single condition, re-executing the step of performing fusion splicing processing on every two initial feature maps to obtain the updated initial feature map of the image to be processed until the updated initial feature map meets the preset single condition, and taking the initial feature map meeting the preset single condition as the first type feature map.
4. The method according to claim 1, wherein the processing the first type feature map according to a preset hybrid attention algorithm to obtain an enhanced feature map comprises:
performing residual error calculation on the first type characteristic diagram to obtain target residual error characteristics;
and according to a preset mixed attention algorithm, performing enhancement processing on the target residual error characteristics to obtain an enhanced characteristic diagram.
5. The method according to claim 1, wherein the reconstructing an image according to the first fusion feature to obtain a target cloud-free image corresponding to the image to be processed comprises:
performing target times of upsampling processing on the first fusion characteristic to obtain recovered pixel data;
and performing the second type convolution calculation on the recovered pixel data to obtain a target cloud-free image corresponding to the image to be processed.
6. The method of claim 4, wherein the cloud removal model comprises a residual calculation module comprising a fusion unit and a plurality of residual calculation units;
the step of performing residual calculation on the first type feature map to obtain a target residual feature comprises:
performing residual calculation on the first type characteristic diagram through the fusion unit and the plurality of residual calculation units included in the residual calculation module to obtain target residual characteristics;
the input end and the output end of each residual error calculation unit are sequentially connected, the output end of the first residual error calculation unit is fused with the output end of a target residual error calculation unit in the plurality of residual error calculation units and then connected with the input end of the next residual error calculation unit of the target residual error calculation unit, and the output end of the first residual error calculation unit is fused with the output end of the last residual error calculation unit and then connected with the input end of the fusion unit.
7. The method of claim 4, wherein the preset mixed attention algorithm comprises a preset pooling algorithm and a preset channel algorithm; the enhancing the target residual error feature according to a preset mixed attention algorithm to obtain an enhanced feature map, including:
performing pooling calculation on the target residual error according to a preset pooling algorithm to obtain an initial spatial feature;
according to a preset channel algorithm, channel calculation is carried out on the target residual error to obtain initial channel characteristics;
and determining an enhanced feature map according to the initial spatial feature and the initial channel feature.
8. The method of claim 1, further comprising:
acquiring training data, wherein the training data comprises a plurality of groups of image pairs, and the image pairs comprise a sample cloud image and a sample cloud-free image;
inputting the cloud image of the sample to a cloud removal model to be trained to obtain a predicted cloud-free image;
calculating a target loss value through a loss function according to the sample cloud-free image and the predicted cloud-free image;
and updating the network parameters of the cloud removal model to be trained according to the target loss value, and returning to the step of executing the training data acquisition until the target loss value meets the training completion condition to obtain the trained cloud removal model.
9. The method of claim 8, wherein said calculating a target loss value from said sample cloud-free image and said predicted cloud-free image by a loss function comprises:
calculating a forward cloud removal loss value, a reverse cloud loss value, a cycle consistency loss value and a perception loss value according to the sample cloud-free image and the predicted cloud-free image respectively through a forward cloud removal loss function, a reverse cloud loss function, a cycle consistency loss function and a perception loss function;
and superposing the forward cloud removal loss value, the reverse cloud loss value, the cycle consistency loss value and the perception loss value to obtain a target loss value.
10. A cloud removal apparatus, the apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, and the image to be processed is a remote sensing image with cloud cover;
the convolution calculation module is used for performing first type convolution calculation on the image to be processed based on the pre-trained cloud removal model to obtain a first type characteristic diagram, and performing second type convolution calculation on the image to be processed to obtain a second type characteristic diagram;
the enhancement module is used for processing the first type characteristic diagram according to a preset mixed attention algorithm to obtain an enhanced characteristic diagram;
the dense residual calculation module is used for performing dense residual calculation on the second type characteristic diagram according to a preset dense residual algorithm to obtain a dense residual;
the fusion module is used for carrying out fusion processing on the enhanced feature map and the dense residual error to obtain a first fusion feature;
and the reconstruction module is used for reconstructing an image according to the first fusion characteristic to obtain a target cloud-free image corresponding to the image to be processed.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 9.
13. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 9 when executed by a processor.
CN202210410940.2A 2022-04-19 2022-04-19 Cloud removing method and device, computer equipment and storage medium Pending CN114742733A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210410940.2A CN114742733A (en) 2022-04-19 2022-04-19 Cloud removing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210410940.2A CN114742733A (en) 2022-04-19 2022-04-19 Cloud removing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114742733A true CN114742733A (en) 2022-07-12

Family

ID=82282347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210410940.2A Pending CN114742733A (en) 2022-04-19 2022-04-19 Cloud removing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114742733A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222629A (en) * 2022-08-08 2022-10-21 西南交通大学 Single remote sensing image cloud removing method based on cloud thickness estimation and deep learning
CN115222629B (en) * 2022-08-08 2023-05-05 西南交通大学 Cloud thickness estimation and deep learning-based single remote sensing image cloud removal method
CN116416469A (en) * 2023-04-10 2023-07-11 中国气象局人工影响天气中心 Method, device, computer equipment and storage medium for identifying ice crystal image based on target area
CN116416469B (en) * 2023-04-10 2023-10-24 中国气象局人工影响天气中心 Method, device, computer equipment and storage medium for identifying ice crystal image based on target area
CN116563147A (en) * 2023-05-04 2023-08-08 北京联合大学 Underwater image enhancement system and method
CN116563147B (en) * 2023-05-04 2024-03-26 北京联合大学 Underwater image enhancement system and method
CN117649358A (en) * 2024-01-30 2024-03-05 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN117649358B (en) * 2024-01-30 2024-04-16 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN114742733A (en) Cloud removing method and device, computer equipment and storage medium
CN111723732B (en) Optical remote sensing image change detection method, storage medium and computing equipment
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
US20200357096A1 (en) Deep-learning based structure reconstruction method and apparatus
Wang et al. Laplacian pyramid adversarial network for face completion
Zhang et al. LR-Net: Low-rank spatial-spectral network for hyperspectral image denoising
Liu et al. Effective image super resolution via hierarchical convolutional neural network
Xiao et al. Physics-based GAN with iterative refinement unit for hyperspectral and multispectral image fusion
CN115439325A (en) Low-resolution hyperspectral image processing method and device and computer program product
Wang et al. Convolutional LSTM-based hierarchical feature fusion for multispectral pan-sharpening
Du et al. Blind image denoising via dynamic dual learning
CN115375548A (en) Super-resolution remote sensing image generation method, system, equipment and medium
Ran et al. RGAN: Rethinking generative adversarial networks for cloud removal
Chen et al. Fusion of Hyperspectral-Multispectral images joining Spatial-Spectral Dual-Dictionary and structured sparse Low-rank representation
Zhou et al. A superior image inpainting scheme using Transformer-based self-supervised attention GAN model
Guo et al. Blind single-image-based thin cloud removal using a cloud perception integrated fast Fourier convolutional network
Ye et al. Bayesian nonlocal patch tensor factorization for hyperspectral image super-resolution
CN111311732B (en) 3D human body grid acquisition method and device
CN111476739B (en) Underwater image enhancement method, system and storage medium
Fang et al. Learning explicit smoothing kernels for joint image filtering
Mo et al. SAUNet3+ CD: A Siamese-attentive UNet3+ for change detection in remote sensing images
CN117422619A (en) Training method of image reconstruction model, image reconstruction method, device and equipment
CN115630660B (en) Barcode positioning method and device based on convolutional neural network
CN115311550B (en) Remote sensing image semantic change detection method and device, electronic equipment and storage medium
Zhang et al. Local-aware coupled network for hyperspectral image super-resolution

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination