CN111462012A - SAR image simulation method for generating countermeasure network based on conditions - Google Patents
- Publication number: CN111462012A
- Application number: CN202010256351.4A
- Authority: CN (China)
- Legal status: Pending
Classifications
- G06T 5/00: Image enhancement or restoration; G06T 5/70: Denoising; smoothing
- G06T 7/30, G06T 7/33: Determination of transform parameters for image registration using feature-based methods
- G06T 2207/10032: Satellite or aerial image; remote sensing; G06T 2207/10044: Radar image; G06T 2207/10052: Images from lightfield camera
- G06T 2207/20021: Dividing image into blocks, subimages or windows; G06T 2207/20024: Filtering details
- G06T 2207/20081: Training; learning; G06T 2207/20084: Artificial neural networks (ANN)
- G06T 2207/20172: Image enhancement details; G06T 2207/20192: Edge enhancement; edge preservation
Abstract
The invention relates to SAR image simulation technology based on a conditional generative adversarial network, and in particular to an SAR image simulation method based on a conditional generative adversarial network. First, the SAR images used to build the data set are preprocessed to suppress speckle noise. Secondly, a training sample set is made from the registered heterogeneous image pairs. Finally, an image conversion network is constructed, and the model is trained and tested on this data set. The method applies rolling guidance filtering to the original SAR image to remove the influence of speckle noise on the generator's learning process, builds the generator network on the basis of the U-Net network, and introduces a residual network to overcome the insufficient depth of U-Net; it can effectively reduce the difficulty of heterogeneous image registration.
Description
Technical Field
The invention belongs to the technical field of SAR image simulation based on conditional generative adversarial networks, and particularly relates to an SAR image simulation method based on a conditional generative adversarial network (CGAN).
Background
With the development of remote sensing observation technology, the types of multi-source remote sensing images have become increasingly abundant. In image registration, images acquired by imaging the same region with different sensors are referred to as heterogeneous images. Because the imaging principles, processing mechanisms and sensor parameters differ greatly between sensors, the correlation between two heterogeneous images is weak; simply applying a registration algorithm designed for homogeneous images to heterogeneous images therefore cannot achieve good results. The key technology for realizing heterogeneous image registration is to convert the heterogeneous images into homogeneous ones so as to reduce the difficulty of registration.
Optical images offer a better visual experience, presenting rich information such as texture features and gray scale, and a high-quality visible-light image can be obtained through an optical sensor under sufficient light and a wide field of view. However, in poor weather conditions (such as insufficient light or cloud and fog occlusion), low-quality optical images lose their application value. SAR images make up for this deficiency: Synthetic Aperture Radar (SAR) imaging is unaffected by factors such as illumination and weather, and can reach extremely high resolution. By combining the imaging advantages of optical and SAR sensors and fusing the complementary data carried in heterogeneous images, important application value can be generated in specific scenarios such as natural disaster monitoring and target detection.
The technical difficulties of SAR image generation include: (1) the influence of speckle noise in SAR images; and (2) the construction of an image conversion framework from optical images to pseudo SAR images.
Disclosure of Invention
The invention aims to provide an SAR image simulation method based on a conditional generative adversarial network.
In order to achieve this purpose, the invention adopts the following technical scheme: an SAR image simulation method based on a conditional generative adversarial network, comprising the following steps:
step 1, denoising the target SAR image with a rolling guidance filtering method;
step 2, making a data set from the optical images and the corresponding preprocessed SAR images, comprising the following substeps:
step 2.1, reading in the registered heterogeneous image pairs;
step 2.2, extracting feature points in the optical image and intercepting image blocks of the same size on the corresponding SAR image, centered on the pixels at the same coordinate positions; after several groups of heterogeneous images are processed, performing data amplification, merging the optical and SAR images, and finally dividing a training set and a test set;
step 3, building an image conversion network structure based on the conditional generative adversarial network, comprising the following substeps:
step 3.1, optimizing the conditional generative adversarial network by combining Res-Net on the basis of the U-Net network structure;
step 3.2, constructing a five-layer convolutional discriminator network structure;
step 4, taking the training set formed by the optical and SAR images as input, iteratively training the model multiple times and optimizing the objective function with the Adam algorithm;
and step 5, generating and evaluating the simulated SAR images on the test set.
In the above SAR image simulation method based on a conditional generative adversarial network, step 1 is implemented as follows: Gaussian filtering is performed on the original SAR image; the filtered image is then taken as the guidance image, and iterative filtering operations are performed on this basis to recover the edges of large-scale objects in the image. The specific steps are as follows:
step 1.1, performing Gaussian filtering on the original SAR image to filter out speckle and small-scale structures:

J^1(p) = \frac{1}{K_p} \sum_{q \in N(p)} \exp\left(-\frac{\|p - q\|^2}{2\sigma_s^2}\right) I(q)   (1)

In formula (1), J^1(p) represents the value of pixel p after Gaussian filtering, I is the input image, N(p) is the neighborhood around p used in the filtering calculation, q ranges over all pixels in that neighborhood, and K_p = \sum_{q \in N(p)} \exp(-\|p - q\|^2 / (2\sigma_s^2)) is a normalization coefficient that keeps the result in the proper range;
step 1.2, performing subsequent iterations in a guided manner to strengthen the edge structure of the image:

J^t(p) = \frac{1}{K_p} \sum_{q \in N(p)} \exp\left(-\frac{\|p - q\|^2}{2\sigma_s^2} - \frac{(J^{t-1}(p) - J^{t-1}(q))^2}{2\sigma_r^2}\right) I(q)   (2)

In formula (2), J^t(p) represents the value of pixel p after the t-th filtering iteration; J^{t-1}(p) and J^{t-1}(q) represent the values of pixels p and q after t-1 iterations; \sigma_s and \sigma_r represent the spatial scale and the range scale, respectively.
In the above SAR image simulation method based on a conditional generative adversarial network, step 2 is implemented as follows:
step 2.1, reading in the registered heterogeneous image pairs, extracting all candidate SIFT feature points from each optical image with the SIFT method, and removing any feature point whose distance to an already-kept feature point is less than d; d adjusts the density of the SIFT feature points collected in the image and is taken as 20;
step 2.2, after screening, intercepting an image block of size 256 × 256 centered on each selected feature point, and discarding the point if the selected area exceeds the image boundary;
step 2.3, after several groups of heterogeneous images are processed, rotating, mirroring and flipping all the images in the optical and SAR folders to complete data amplification;
step 2.4, merging each optical image and its counterpart from the SAR folder into a single 512 × 256 image and storing it in a third folder;
and step 2.5, randomly dividing the samples in the folder from step 2.4 into a training set and a test set at a ratio of 80%/20%.
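The screening and splitting logic of steps 2.1 to 2.5 can be sketched as follows. The SIFT detector itself is assumed to come from an external library, so only the distance-based screening (d = 20), the boundary check for 256 × 256 blocks, and the 80%/20% split are shown; the function names are illustrative.

```python
import numpy as np

def screen_points(points, d=20.0):
    """Greedy feature-point screening (step 2.1): keep a point only if it is
    at least d pixels away from every point already kept."""
    kept = []
    for p in points:
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= d * d for q in kept):
            kept.append(p)
    return kept

def valid_centers(points, img_h, img_w, patch=256):
    """Boundary check of step 2.2: discard a point if the patch-sized block
    centered on it would cross the image boundary."""
    half = patch // 2
    return [(y, x) for (y, x) in points
            if half <= y <= img_h - half and half <= x <= img_w - half]

def split_dataset(n_samples, train_frac=0.8, seed=0):
    """Step 2.5: random 80%/20% split of sample indices."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(round(train_frac * n_samples))
    return idx[:n_train], idx[n_train:]
```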
In the above SAR image simulation method based on a conditional generative adversarial network, the generator in step 3.1 adopts a U-Net network structure and comprises an encoder module and a decoder module;
the encoder module comprises 8 layers, each with a double (conv + bn + lrelu) + shortcut structure; the number of convolution kernels doubles layer by layer from 64 and stays constant after reaching 512; bn denotes batch normalization, lrelu denotes the LeakyReLU activation function, and shortcut refers to the shortcut connection of a residual network;
the decoder module also comprises 8 layers, with the same numbers of convolution kernels as the corresponding encoder units; each decoder unit has a conv + bn + relu structure, where relu denotes the ReLU activation function, and the corresponding encoder and decoder layers are concatenated through skip connections; every convolution uses 3 × 3 kernels, and a 2 × 2 max-pooling layer connects consecutive units.
In the above SAR image simulation method based on a conditional generative adversarial network, the five-layer convolutional discriminator network in step 3.2 adopts a PatchGAN structure. PatchGAN divides an image into N × N image blocks of fixed size, judges the authenticity of each block separately, and averages the responses obtained over the image as the output result; the patch size is set to 70 × 70. The first four layers of the discriminator perform feature extraction on the sample; the number of convolution kernels increases from 64, the kernel size is 3 × 3, and the stride is 2. The last convolution layer maps the features to a one-dimensional output with a Sigmoid activation function. Batch normalization is applied after each of the first four convolution layers, with LeakyReLU activation (slope 0.2).
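A quick way to sanity-check the stated 70 × 70 patch size is to compute the discriminator's receptive field from its kernel/stride configuration with the standard recurrence. The sketch below shows that the five-layer, 3 × 3-kernel configuration described here works out to a 63-pixel field, while the 70-pixel figure matches the common 4 × 4-kernel PatchGAN layout; which layout the patent actually intends is not stated, so both are shown.

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of one output unit of a stacked
    convolutional network; `layers` lists (kernel, stride) pairs from input
    to output. Each layer adds (k - 1) times the product of the strides of
    all earlier layers."""
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# Five-layer discriminator as described here: four 3x3 stride-2 convolutions
# followed by one 3x3 stride-1 convolution.
field_3x3 = receptive_field([(3, 2)] * 4 + [(3, 1)])      # 63

# The classic 70x70 PatchGAN uses 4x4 kernels: three stride-2 layers and
# two stride-1 layers.
field_4x4 = receptive_field([(4, 2)] * 3 + [(4, 1)] * 2)  # 70
```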
In the above SAR image simulation method based on a conditional generative adversarial network, step 4 is implemented as follows:
step 4.1, training the conditional generative adversarial network with the following loss function:

G^* = \arg\min_G \max_D L_{cGAN}(G, D) + \lambda L_{L1}(G)   (5)

The real SAR image y is taken as the constraint condition, random noise is denoted z, and x is the input optical image; x and y obey the data distribution p_{data}(x, y), and z obeys p_z(z). L_{cGAN}(G, D) = E_{x,y}[\log D(x, y)] + E_{x,z}[\log(1 - D(x, G(x, z)))] represents the adversarial loss between the generator and the discriminator, and L_{L1}(G) = E_{x,y,z}[\|y - G(x, z)\|_1] represents the pixel-level constraint between the generator's image blocks and the real image blocks. D(x, y) represents the discriminator's prediction of whether x and y match, G(x, z) represents the generator's output for the input optical image and noise, and D(x, G(x, z)) represents the discriminator's prediction on x and G(x, z). \lambda is the coefficient of the introduced L1 loss and is set to 100. The generator is trained to minimize L_{cGAN}, while the discriminator is trained to maximize it;
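The objective in formula (5) can be illustrated numerically. The sketch below assumes discriminator scores already squashed into (0, 1) by the Sigmoid and uses simple sample means in place of expectations; it is a didactic rendering of the loss terms, not a training loop.

```python
import numpy as np

EPS = 1e-12  # avoids log(0) in the illustrations below

def l_cgan(d_real, d_fake):
    """L_cGAN(G, D): mean log D(x, y) + mean log(1 - D(x, G(x, z))),
    with discriminator scores already in (0, 1)."""
    return np.mean(np.log(d_real + EPS)) + np.mean(np.log(1.0 - d_fake + EPS))

def l_l1(real_patch, fake_patch):
    """L_L1(G): mean absolute pixel difference between real and generated blocks."""
    return np.mean(np.abs(real_patch - fake_patch))

def generator_objective(d_fake, real_patch, fake_patch, lam=100.0):
    """What the generator minimizes: its adversarial term plus lambda * L1,
    with lambda = 100 as in the text."""
    return np.mean(np.log(1.0 - d_fake + EPS)) + lam * l_l1(real_patch, fake_patch)
```

The generator's value decreases both when its samples fool the discriminator (d_fake near 1) and when they are pixel-wise close to the real SAR blocks, which is the stated combination of high-frequency (adversarial) and low-frequency (L1) supervision.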
step 4.2, the training of the conditional generation countermeasure network comprises the following steps:
step 4.2.1, initialize the L1 loss hyperparameter λ and the total number of iterations t;
step 4.2.2, for i = 1, 2, ..., t do;
step 4.2.3, give m pairs of sample images {(I_o^{(1)}, I_s^{(1)}), ..., (I_o^{(m)}, I_s^{(m)})} and generate the pseudo SAR images I_g^{(j)} = G(I_o^{(j)}, z);
step 4.2.5, update the parameters of discriminator D and maximize the following formula:
\frac{1}{m} \sum_{j=1}^{m} \left[ \log D(I_o^{(j)}, I_s^{(j)}) + \log(1 - D(I_o^{(j)}, I_g^{(j)})) \right]
step 4.2.7, update the parameters of generator G and minimize the following:
\frac{1}{m} \sum_{j=1}^{m} \left[ \log(1 - D(I_o^{(j)}, I_g^{(j)})) + \lambda \|I_s^{(j)} - I_g^{(j)}\|_1 \right]
step 4.2.9, end;
where I_o represents an optical image, I_s represents the corresponding real SAR image, and I_g represents the generated pseudo SAR image;
step 4.3, the Adam algorithm optimizes the objective function according to the following formulas:

m_t = \beta_1 \times m_{t-1} + (1 - \beta_1) \times g_t   (6)
v_t = \beta_2 \times v_{t-1} + (1 - \beta_2) \times g_t^2   (7)
\hat{m}_t = m_t / (1 - \beta_1^t), \quad \hat{v}_t = v_t / (1 - \beta_2^t)   (8)
\theta_t = \theta_{t-1} - \eta \times \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)   (9)

where m_t and v_t represent the first- and second-order moment estimates of the gradient, \beta_1 and \beta_2 denote the exponential decay rates of the moment estimates, g_t is the gradient of the objective function with respect to the parameters \theta at step t, \hat{m}_t and \hat{v}_t are the bias-corrected estimates of m_t and v_t, \epsilon is a small constant, and \eta represents the learning rate;
the Adam algorithm flow is as follows:
step 4.3.1, input \eta, \beta_1, \beta_2 and the maximum cycle number epoch;
step 4.3.2, at t = 0, initialize the parameters \theta_0 and set the first-order moment estimate m_0 = 0 and the second-order moment estimate v_0 = 0;
step 4.3.3, update the iteration count: t = t + 1;
step 4.3.4, select m samples {x^{(1)}, ..., x^{(m)}} from the training sample set, with corresponding target samples denoted y^{(i)}, and compute the gradient g_t at \theta_{t-1};
step 4.3.5, update m_t: m_t = \beta_1 \times m_{t-1} + (1 - \beta_1) \times g_t;
step 4.3.6, update v_t: v_t = \beta_2 \times v_{t-1} + (1 - \beta_2) \times g_t^2;
step 4.3.7, compute the bias-corrected estimate \hat{m}_t = m_t / (1 - \beta_1^t);
step 4.3.8, compute the bias-corrected estimate \hat{v}_t = v_t / (1 - \beta_2^t);
step 4.3.9, update \theta_t: \theta_t = \theta_{t-1} - \eta \times \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon); loop steps 4.3.3 to 4.3.8 until f(\theta) converges or the preset maximum cycle number epoch is reached, and return the optimal solution \theta_t of f(\theta).
In the above SAR image simulation method based on a conditional generative adversarial network, the learning rate \eta_t of the Adam optimization algorithm is calculated as follows:

\eta_t = \eta, \ \text{if iter} < \text{offset}; \quad \eta_t = \eta \times \frac{epoch - iter}{epoch - offset}, \ \text{if iter} \geq \text{offset}

where \eta represents the initial learning rate, epoch represents the total number of iterations, iter represents the current iteration, and offset represents the iteration at which the learning rate is required to start decreasing during training. When iter is smaller than offset, the preset larger value \eta is used as the current learning rate; once iter reaches offset, the learning rate is gradually reduced.
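The Adam recursion and the dynamic learning rate of steps 4.3.1 to 4.3.9 can be sketched together. The piecewise-linear decay after `offset` is an assumption consistent with the description (a constant preset η before offset, gradual reduction afterwards); the quadratic test function and all parameter values are only for illustration.

```python
import numpy as np

def lr_schedule(eta, epoch, offset, it):
    """Dynamic learning rate: the preset eta before `offset`, then (assumed
    here) a linear decay toward zero at the final iteration `epoch`."""
    if it < offset:
        return eta
    return eta * (epoch - it) / float(epoch - offset)

def adam_minimize(grad, theta0, eta=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, epoch=300, offset=150):
    """Adam with first/second moment estimates, bias correction, and the
    dynamic learning rate above (steps 4.3.1 to 4.3.9)."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)  # first-order moment estimate, m_0 = 0
    v = np.zeros_like(theta)  # second-order moment estimate, v_0 = 0
    for t in range(1, epoch + 1):
        g = grad(theta)                          # g_t
        m = beta1 * m + (1 - beta1) * g          # formula (6)
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)             # bias-corrected estimates
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr_schedule(eta, epoch, offset, t) * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Minimizing f(theta) = (theta - 3)^2 via its gradient 2 * (theta - 3)
# should drive theta toward 3.
sol = adam_minimize(lambda th: 2.0 * (th - 3.0), np.array([0.0]))
```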
The invention has the following beneficial effects: (1) the invention adopts the rolling guidance filtering algorithm to remove speckle noise from the original SAR image and constructs the data set on that basis, so that the generative network learns more realistic features from the SAR training samples and the influence of false speckle features is reduced.
(2) A conditional generative adversarial network converts the optical image into a pseudo SAR image, overcoming the difficulty of feature extraction caused by the significant differences between heterogeneous images; this effectively reduces the difficulty of heterogeneous image registration.
Drawings
FIG. 1 is a flow chart of one embodiment of the present invention;
FIG. 2 is a flow chart of the rolling guidance filtering algorithm for denoising SAR images in an embodiment of the present invention;
FIG. 3(a) is an original image according to one embodiment of the present invention;
FIG. 3(b) is a diagram illustrating the results of the bilateral filtering algorithm according to one embodiment of the present invention;
FIG. 3(c) is a diagram illustrating the results of a guided filtering algorithm according to an embodiment of the present invention;
FIG. 3(d) is a diagram illustrating the result of the nonlinear diffusion filtering algorithm according to one embodiment of the present invention;
FIG. 3(e) is a diagram illustrating the results of the rolling guidance filtering algorithm according to an embodiment of the present invention;
FIG. 4(a) is an optical image used in one embodiment of the present invention;
FIG. 4(b) is a SAR image used in one embodiment of the present invention;
FIG. 5(a), FIG. 5(b) and FIG. 5(c) are the first, second and third input optical images in one embodiment of the present invention;
FIG. 5(d), FIG. 5(e) and FIG. 5(f) are the corresponding first, second and third real SAR images in one embodiment of the present invention;
fig. 5(g), fig. 5(h) and fig. 5(i) are the first, second and third pseudo SAR image effect maps generated by the generator in an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
In order to overcome the difficulty that differences between heterogeneous images pose for registration, this embodiment first preprocesses the SAR images used to make the data set, so that, with coherent speckle noise removed, the generative network learns more realistic features and the influence of false speckle features is reduced. Secondly, a training sample set is made from the registered heterogeneous image pairs. Finally, an image conversion network is constructed, and the model is trained and tested on this data set. The specific steps are as follows:
S100, denoising the target SAR image with a rolling guidance filtering method;
S200, making the optical images and the corresponding preprocessed SAR images into a training sample set, with the following specific steps:
S210, reading in the registered heterogeneous image pairs;
S220, extracting feature points in the optical image and intercepting image blocks of the same size on the corresponding SAR image, centered on the pixels at the same coordinate positions; after several groups of heterogeneous images are processed, performing data amplification, merging the optical and SAR images, and finally dividing a training set and a test set;
S300, constructing an image conversion network structure based on the conditional generative adversarial network, with the following specific steps:
S310, optimizing the generative network on the basis of the U-Net network structure combined with the advantages of Res-Net;
S320, constructing a discriminator network structure formed by five convolution layers;
S400, taking the training set formed by the optical and SAR images as input, iteratively training the model multiple times and optimizing the objective function with the Adam optimization algorithm;
and S500, generating and evaluating the simulated SAR images on the test set.
Further, the specific implementation manner of step S100 is as follows:
The original SAR image is denoised with the rolling guidance filtering algorithm, briefly described as follows: 1) Gaussian filtering is performed on the original SAR image to smooth speckle noise and other complex small-scale regions; 2) the processed image is taken as the guidance image, and iterative filtering operations performed on this basis recover the edges of large-scale objects in the image. The specific algorithm flow is as follows:
(1) First, Gaussian filtering is performed on the SAR image to filter out speckle and small-scale structures:

J^1(p) = \frac{1}{K_p} \sum_{q \in N(p)} \exp\left(-\frac{\|p - q\|^2}{2\sigma_s^2}\right) I(q)   (1)

In formula (1), J^1(p) represents the value of pixel p after Gaussian filtering, I is the input image, N(p) is the neighborhood around p used in the filtering calculation, q ranges over all pixels in that neighborhood, and K_p is a normalization coefficient that keeps the result in the proper range.
(2) On this basis, subsequent iterations are performed in a guided manner to enhance the edge structure of the image:

J^t(p) = \frac{1}{K_p} \sum_{q \in N(p)} \exp\left(-\frac{\|p - q\|^2}{2\sigma_s^2} - \frac{(J^{t-1}(p) - J^{t-1}(q))^2}{2\sigma_r^2}\right) I(q)   (2)

In formula (2), J^t(p) represents the value of pixel p after the t-th filtering iteration; J^{t-1}(p) and J^{t-1}(q) represent the values of pixels p and q after t-1 iterations; \sigma_s and \sigma_r represent the spatial scale and the range scale, respectively.
Further, the specific implementation manner of step S200 is as follows:
(1) reading in the registered heterogeneous image pairs, extracting all candidate SIFT feature points from each optical image with the SIFT method, and removing any feature point whose distance to an already-kept feature point is less than d; d adjusts the density of the SIFT feature points collected in the image and is set to 20 in this embodiment;
(2) after screening, intercepting an image block of size 256 × 256 centered on each selected feature point, and discarding the point if the selected area exceeds the image boundary; then intercepting image blocks of the same size on the corresponding SAR image, centered on the pixel at the same coordinate position, and saving the optical and SAR image blocks under the same name into two corresponding folders;
(3) after several groups of heterogeneous images are processed, applying rotation, mirroring, flipping and similar operations to all images in the two folders to complete data amplification;
(4) merging each optical image and its SAR counterpart from the two folders into a single 512 × 256 image and storing it in a third folder;
(5) randomly dividing the samples in that folder into a training set and a test set at a ratio of 80%/20%; in this embodiment the total number of samples is 10464, of which 8372 form the training data set and 2092 the test data set;
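Steps (3) and (4) can be sketched as follows. The augmentation applies each geometric operation identically to the optical patch and the SAR patch so that registration is preserved; the eight-fold rotation/mirror family is an assumed concrete choice for the "rotation, mirroring and flipping" operations, and the merged sample is stored as a 256-row, 512-column array.

```python
import numpy as np

def augment_pair(opt, sar):
    """Eight-fold augmentation of a registered optical/SAR patch pair:
    the four 90-degree rotations and the mirror image of each, applied
    identically to both patches so registration is preserved."""
    out = []
    for k in range(4):
        o, s = np.rot90(opt, k), np.rot90(sar, k)
        out.append((o, s))
        out.append((np.fliplr(o), np.fliplr(s)))
    return out

def merge_pair(opt, sar):
    """Concatenate a 256x256 optical patch and its SAR patch side by side
    into one 512-wide, 256-high sample."""
    return np.concatenate([opt, sar], axis=1)
```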
further, the specific implementation manner of step S310 is as follows:
The generator of the conditional generative adversarial network in this embodiment adopts the idea of the U-Net network structure and comprises an encoder module and a decoder module. The encoder module contains 8 layers, each with a double (conv + bn + lrelu) + shortcut structure; the number of convolution kernels doubles from 64 layer by layer and stays constant after reaching 512; bn denotes batch normalization, lrelu denotes the LeakyReLU activation function, and shortcut refers to the shortcut connection of a residual network. The decoder module also contains 8 layers, and mirror-image units of the encoder and decoder have the same number of convolution kernels; the difference is that each decoder unit has a conv + bn + relu structure, where relu denotes the ReLU activation function, and the corresponding encoder and decoder layers are concatenated through skip connections.
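The encoder schedule described above (channels doubling from 64 and capped at 512, spatial size halving at every one of the 8 layers) can be checked with a few lines of arithmetic; this is only a shape computation, not a network implementation.

```python
def unet_encoder_shapes(size=256, depth=8, base=64, cap=512):
    """Per-layer (channels, spatial size) of the encoder for a 256x256
    input: channels double from 64 until capped at 512, and the spatial
    size halves at each of the 8 layers, reaching 1x1 at the bottleneck.
    The decoder mirrors this schedule, and the skip connection of decoder
    layer i concatenates the features of encoder layer depth - i."""
    shapes, c = [], base
    for _ in range(depth):
        size //= 2
        shapes.append((min(c, cap), size))
        c *= 2
    return shapes
```

For the default arguments this yields 64, 128, 256 and then five 512-channel layers, with spatial sizes 128 down to 1, matching the 8-layer plan in the text.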
Further, the specific implementation manner of step S320 is as follows:
The discriminator adopts a PatchGAN structure. The first four convolution layers extract features from the sample; the number of convolution kernels increases from 64, the kernel size is 3 × 3, and the stride is 2. The last convolution layer maps the features to a one-dimensional output with a Sigmoid activation function. Batch normalization is applied after each of the first four convolution layers, with LeakyReLU activation (slope 0.2).
Further, the specific implementation manner of step S400 is as follows:
for conventional GAN, a random noise is input and corresponding output data is generated. But there are no constraints between the input and output, which makes the generated data have a large uncertainty, which may deviate from the ideal generation goal. And the conditional generation countermeasure network (CGAN) adds an additional piece of information on the basis of the GAN, and the additional information is used as a constraint condition of the generation process, so that the output data of the generated network meets the expected requirement. The embodiment trains the generation of the countermeasure network according to the loss function of the following formula:
G^* = \arg\min_G \max_D L_{cGAN}(G, D) + \lambda L_{L1}(G)   (5)′

The real SAR image y is taken as the constraint condition, random noise is denoted z, and x is the input optical image; x and y obey the data distribution p_{data}(x, y), and z obeys p_z(z). L_{cGAN}(G, D) = E_{x,y}[\log D(x, y)] + E_{x,z}[\log(1 - D(x, G(x, z)))] represents the adversarial loss between the generator and the discriminator, and L_{L1}(G) = E_{x,y,z}[\|y - G(x, z)\|_1] represents the pixel-level constraint between the generator's image blocks and the real image blocks. D(x, y) represents the discriminator's prediction of whether x and y match, G(x, z) represents the generator's output for the input optical image and noise, and D(x, G(x, z)) represents the discriminator's prediction on x and G(x, z). \lambda is the coefficient of the introduced L1 loss; in this embodiment λ = 100. The generator is trained to minimize L_{cGAN}, while the discriminator is trained to maximize it. By combining the L1 loss with the cGAN loss, both the low-frequency and the high-frequency characteristics of the image are attended to simultaneously, which effectively improves the quality of the generated samples.
For the generative adversarial network structure proposed in this embodiment, let the optical image be denoted I_o, the corresponding SAR image I_s, and the generated pseudo SAR image I_g. The conditional generative adversarial network (CGAN) training steps of this embodiment are as follows:
algorithm 1 conditional generation confrontation network CGAN training procedure:
1. initialization L1Loss over-parameter λ, total number of iterations t;
2.for i=1,2,...,t do;
3. giving m pairs of sample images:
5. update the parameters of arbiter D and maximize the following:
7. update the parameters of generator G and minimize the following:
9. and (6) ending.
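The alternating update scheme of Algorithm 1 can be sketched as the loop below; `d_step` and `g_step` are hypothetical placeholders for the framework's optimizer steps of steps 5 and 7 (ascent on the discriminator objective, descent on the generator objective), not named functions of the embodiment:

```python
def train_cgan(batches, d_step, g_step, lam=100.0, epochs=1):
    """Alternating CGAN training loop (sketch of Algorithm 1).
    batches: iterable of (optical, sar) paired samples;
    d_step(optical, sar): one discriminator update, returns its loss;
    g_step(optical, sar, lam): one generator update, returns its loss."""
    history = []
    for _ in range(epochs):
        for optical, sar in batches:
            d_loss = d_step(optical, sar)       # step 5: maximize L_cGAN w.r.t. D
            g_loss = g_step(optical, sar, lam)  # step 7: minimize L_cGAN + lam*L_1 w.r.t. G
            history.append((d_loss, g_loss))
    return history
```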
For the objective function proposed in this embodiment, the Adam optimization algorithm is used. The formulas involved are:
m_t = β_1 × m_{t-1} + (1 − β_1) × g_t    (6)
where m_t and v_t denote the first- and second-order moment estimates of the gradient, β_1 and β_2 denote the exponential decay rates, g_t is the gradient of the objective function at θ_{t-1}, m̂_t and v̂_t are the bias-corrected versions of m_t and v_t, ε is a small constant, and η denotes the learning rate.
1. Input η, β_1, β_2 and parameters such as the maximum number of epochs;
2. At t = 0, initialize the parameter θ_0, and set the first-order moment estimate m_0 = 0 and the second-order moment estimate v_0 = 0;
3. Update the iteration count: t = t + 1;
4. Select m samples {x^(1), ..., x^(m)} from the training sample set, denote the corresponding target samples y^(i), and compute the gradient at θ_{t-1}:
5. Update m_t: m_t = β_1 × m_{t-1} + (1 − β_1) × g_t;
9. Update θ_t. Steps 3 to 8 are repeated until f(θ) converges or the preset maximum number of epochs is reached, and the optimal solution θ_t of f(θ) is returned.
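The steps above can be sketched as a standard Adam loop; this is a NumPy illustration of the moment estimates and bias corrections described around formula (6), with illustrative default hyperparameters, not the embodiment's exact optimizer code:

```python
import numpy as np

def adam(grad, theta0, eta=2e-4, beta1=0.9, beta2=0.999, eps=1e-8, epochs=100):
    """Adam optimizer (sketch). grad(theta) returns the gradient g_t;
    beta1/beta2 are the exponential decay rates of the moment estimates."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)  # first-order moment estimate, m_0 = 0
    v = np.zeros_like(theta)  # second-order moment estimate, v_0 = 0
    for t in range(1, epochs + 1):
        g = grad(theta)                       # step 4: gradient at theta_{t-1}
        m = beta1 * m + (1 - beta1) * g       # step 5: update m_t, formula (6)
        v = beta2 * v + (1 - beta2) * g ** 2  # update v_t
        m_hat = m / (1 - beta1 ** t)          # bias-corrected m_t
        v_hat = v / (1 - beta2 ** t)          # bias-corrected v_t
        theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)  # step 9: update theta_t
    return theta
```

For example, minimizing f(θ) = θ² (gradient 2θ) drives θ toward 0.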
To shorten the convergence time of the model, this embodiment replaces the fixed learning rate of the Adam optimization algorithm with a dynamically adjusted one. The learning rate η_t is calculated as follows:
where η denotes the initial learning rate, epoch denotes the total number of iterations, iter denotes the current iteration, and offset denotes the iteration at which the learning rate starts to decrease during training. When iter is smaller than offset, the preset larger η is used as the current learning rate, so that the objective function quickly reaches a good solution; once iter reaches offset, the learning rate is gradually reduced, which prevents the solution from oscillating around a minimum.
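The schedule's equation did not survive extraction; the sketch below assumes the decay after `offset` is linear to zero at the final iteration (a common choice in pix2pix-style training), which matches the behavior described but is an assumption, not the patent's exact formula:

```python
def dynamic_lr(eta, it, offset, epochs):
    """Piecewise learning-rate schedule (sketch, linear decay assumed).
    eta: initial learning rate; it: current iteration (iter);
    offset: iteration where decay starts; epochs: total iterations."""
    if it < offset:
        return eta  # keep the larger preset rate early in training
    # decay linearly from eta at `offset` down to 0 at `epochs`
    return eta * (1.0 - (it - offset) / float(epochs - offset))
```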
In specific implementation, as shown in fig. 1, the technical solution adopted in this embodiment includes the following key parts and techniques:
The first part: preprocessing the SAR image. This embodiment applies a rolling guidance filtering algorithm suited to SAR images to suppress speckle in the SAR image. The processing flow is shown in fig. 2. The specific steps of the algorithm are:
(1) First, Gaussian filtering is applied to the SAR image to filter out speckle and fine small-scale structures; the expression is:
In formula (12), J_1(p) denotes the pixel value of pixel p after Gaussian filtering, N(p) denotes the neighborhood of the pixel used in the filtering calculation, q denotes a pixel in that neighborhood, and k_p is a normalization coefficient that keeps the result within range.
(2) On this basis, subsequent iterations are performed in a guided manner to reinforce the edge structure of the image; the expression is:
In formula (13), J_t(p) denotes the pixel value of pixel p after the t-th filtering pass; J_{t-1}(p) and J_{t-1}(q) denote the pixel values of pixels p and q after t−1 passes; σ_s and σ_r denote the spatial scale and the range scale, respectively.
Figs. 3(a)-3(e) compare the results of classical filtering algorithms with the rolling guidance filtering algorithm: fig. 3(a) is the original image, fig. 3(b) the bilateral-filtered image, fig. 3(c) the guided-filtered image, fig. 3(d) the nonlinear-diffusion-filtered image, and fig. 3(e) the rolling-guidance-filtered image.
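A direct (unoptimized) NumPy sketch of formulas (12)-(13): the first pass uses a constant guide, which reduces to plain Gaussian filtering, and each later pass uses the previous output J_{t-1} as guidance while always filtering the original image. Parameter values and the brute-force loops are illustrative only:

```python
import numpy as np

def rolling_guidance_filter(img, sigma_s=3.0, sigma_r=25.5, iters=4, radius=None):
    """Rolling guidance filtering (sketch of formulas (12)-(13))."""
    if radius is None:
        radius = int(3 * sigma_s)
    H, W = img.shape
    pad = np.pad(img.astype(float), radius, mode='reflect')
    # spatial Gaussian weights over the (2r+1)^2 neighborhood
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    ws = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))

    def filt(guide):
        out = np.empty((H, W), dtype=float)
        gpad = np.pad(guide, radius, mode='reflect')
        for i in range(H):
            for j in range(W):
                patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                gpatch = gpad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                # range weight computed on the guidance image J_{t-1}
                wr = np.exp(-(gpatch - guide[i, j]) ** 2 / (2 * sigma_r ** 2))
                w = ws * wr
                out[i, j] = (w * patch).sum() / w.sum()  # k_p normalization
        return out

    J = filt(np.zeros((H, W)))  # t = 1: constant guide -> Gaussian filter, formula (12)
    for _ in range(iters - 1):  # t >= 2: guided iterations, formula (13)
        J = filt(J)
    return J
```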
The second part: producing the data set. The specific steps are:
(1) Read in the registered heterogeneous image pair, extract all candidate SIFT feature points from each optical image with the SIFT method, and remove points whose distance to another feature point is smaller than d, using d to adjust the density of the SIFT feature points collected from the image; d is set to 20 in this embodiment;
(2) After screening, crop a 256 × 256 image block centered on each retained feature point; if the selected area exceeds the image boundary, discard that point. Then crop image blocks of the same size from the corresponding SAR image, centered on the pixel at the same coordinates, and save the optical and SAR blocks under the same name into two corresponding folders;
(3) After several groups of heterogeneous images have been processed, apply rotation, mirroring, flipping, and similar operations to all images in the optical-image and SAR-image folders to complete data augmentation;
(4) Merge each image pair from the optical-image and SAR-image folders into one 512 × 256 image and store it in another folder;
(5) Randomly divide the samples in that folder into a training set and a test set at a ratio of 80%/20%. In this embodiment the total number of samples is 10464, with 8372 in the training set and 2092 in the test set.
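Steps (2) and (4) can be sketched as follows: crop paired 256 × 256 blocks around the retained feature points, discard border-crossing points, and concatenate each pair side by side into a 256 × 512 sample. The function name and the assumption of single-channel arrays are illustrative:

```python
import numpy as np

def make_pairs(optical, sar, centers, patch=256):
    """Paired patch extraction (sketch of the second part).
    optical/sar: registered single-channel images of the same shape;
    centers: (row, col) feature-point coordinates retained after screening."""
    half = patch // 2
    samples = []
    for (r, c) in centers:
        # step (2): discard points whose block would cross the image border
        if (r - half < 0 or c - half < 0 or
                r + half > optical.shape[0] or c + half > optical.shape[1]):
            continue
        o = optical[r - half:r + half, c - half:c + half]
        s = sar[r - half:r + half, c - half:c + half]
        # step (4): concatenate optical | SAR into one wide sample
        samples.append(np.concatenate([o, s], axis=1))
    return samples
```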
The third part: building the image conversion network framework. The image conversion network comprises a generator network and a discriminator network:
(1) constructing a generator network:
The generator of the conditional generative adversarial network in this embodiment follows the U-Net structure and consists of an encoder module and a decoder module. The encoder has 8 layers; each layer is a double (conv + bn + lrelu) + shortcut unit, and the number of convolution kernels doubles from 64 upward until it stays fixed at 512. Here bn denotes batch normalization, lrelu denotes the LeakyReLU activation function, and shortcut refers to the "shortcut" connection of a residual network. The decoder also has 8 layers, and units at the same depth in the encoder and decoder have the same number of convolution kernels; the difference is that each decoder unit has the structure conv + bn + relu, where relu denotes the ReLU activation function. Corresponding layers of the encoder and decoder are concatenated as skip connections.
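The encoder's channel progression ("doubles from 64, then fixed at 512") can be checked with a tiny helper; this is only a sanity-check sketch of the stated layer widths, not network code:

```python
def encoder_channels(layers=8, base=64, cap=512):
    """Channel count per encoder layer (sketch): doubles from `base`
    until it reaches `cap`, then stays fixed."""
    ch, out = base, []
    for _ in range(layers):
        out.append(ch)
        ch = min(ch * 2, cap)
    return out
```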
(2) Constructing a discriminator network:
The overall idea of PatchGAN is to divide the image into N × N image blocks of fixed size; the discriminator judges the authenticity of each block separately, and the responses obtained over one image are finally averaged to give the output result. PatchGAN thus judges the local features of the image better. In this embodiment the patch size is set to 70 × 70.
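The 70 × 70 patch size is the receptive field of one discriminator output pixel, which can be computed back-to-front from the conv stack. Note as a hedge: the canonical 70 × 70 PatchGAN of pix2pix uses 4 × 4 kernels (three stride-2 layers plus two stride-1 layers); the 3 × 3 stride-2 stack described in the claims would give a 63-pixel field, so the configurations below are illustrative:

```python
def receptive_field(layers):
    """Receptive field of a conv stack (sketch).
    layers: list of (kernel, stride) from input to output.
    Computed back-to-front with rf <- rf * stride + (kernel - stride)."""
    rf = 1
    for k, s in reversed(layers):
        rf = rf * s + (k - s)
    return rf
```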
The fourth part: training the model. The image conversion network built in the third part is trained with the generative-adversarial-network training samples produced in the second part.
The invention trains the generative adversarial network with the loss function of the following formula:
G' = arg min_G max_D L_cGAN(G, D) + λ·L_1(G)    (16)
A real SAR image y is taken as the constraint condition, the random noise is denoted z, and x is the input optical image, where (x, y) obeys the data distribution p_data(x, y) and the random noise z obeys p_z(z). L_cGAN(G, D) denotes the adversarial loss constraint between the generator and the discriminator, L_1(G) denotes the pixel-level constraint between the generator's image blocks and the real image blocks, D(x, y) denotes the discriminator's matching prediction on (x, y), G(x, z) denotes the generator's output image given the input optical image and noise, D(x, G(x, z)) denotes the discriminator's matching prediction on (x, G(x, z)), and λ denotes the coefficient of the introduced L_1 loss, set to λ = 100 in this embodiment. The generator is trained to minimize L_cGAN, while the discriminator is trained to maximize it.
Further, this embodiment optimizes the objective function with the Adam optimization algorithm. The formulas involved are:
m_t = β_1 × m_{t-1} + (1 − β_1) × g_t    (17)
where m_t and v_t denote the first- and second-order moment estimates of the gradient, β_1 and β_2 denote the exponential decay rates, g_t is the gradient of the objective function at θ_{t-1}, m̂_t and v̂_t are the bias-corrected versions of m_t and v_t, ε is a small constant, and η denotes the learning rate.
To shorten the convergence time of the model, this embodiment replaces the fixed learning rate of the Adam optimization algorithm with a dynamically adjusted one. The learning rate η_t is calculated as follows:
where η denotes the initial learning rate, epoch denotes the total number of iterations, iter denotes the current iteration, and offset denotes the iteration at which the learning rate starts to decrease during training. When iter is smaller than offset, the preset larger η is used as the current learning rate, so that the objective function quickly reaches a good solution; once iter reaches offset, the learning rate is gradually reduced, which prevents the solution from oscillating around a minimum.
The fifth part: testing the image conversion effect. The effect of this embodiment is further described with reference to simulation experiments.
1. Simulation experiment environment:
(1) computer configuration:
System type: Ubuntu 64-bit operating system.
Graphics card: NVIDIA GeForce GTX 1050 Ti
(2) Experimental environment and framework
Framework: TensorFlow 1.7.0
Python version: Python 3.5
2. Experimental content and result analysis:
Figs. 4(a) and 4(b) are, respectively, the optical image and the SAR image captured over a region of Shanghai, each 1024 × 1024. The two images are registered images from different sources, and the data set, containing training and test samples, is obtained by cutting these images into blocks.
In the experiment, existing image conversion network frameworks are compared with this embodiment: pix2pix, CycleGAN, and the present model are trained on a data set containing unpreprocessed SAR images and evaluated on the same test set, denoted A1, A2, and A3, respectively; in addition, the model proposed in this embodiment is trained on the data set containing the preprocessed SAR images and denoted A4. The following table shows the evaluation results:
table 1 similarity evaluation of simulation experiment test set
As can be seen from Table 1, the image conversion framework provided in this embodiment effectively improves the SSIM structural-similarity index of the generated images. The method has stronger feature-extraction capability and faster network convergence: it can generate a good-quality pseudo-SAR image by around the 200th iteration, with a higher SSIM index, which shows that the proposed framework effectively realizes the conversion from an optical image to a pseudo-SAR image.
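The SSIM index reported in Table 1 can be illustrated with a single-window (global) evaluation; practical SSIM averages this quantity over local sliding windows, so this is a simplified sketch, not the evaluation code of the experiment:

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Global (single-window) SSIM index between two images (sketch).
    L is the dynamic range; c1/c2 are the standard stabilizing constants."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # luminance/contrast/structure terms combined into one ratio
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; lower values indicate weaker structural similarity between a pseudo-SAR image and the real SAR image.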
Fig. 5(a), 5(b), and 5(c) are first, second, and third optical images, respectively, fig. 5(d), 5(e), and 5(f) are first, second, and third real SAR images, respectively, and fig. 5(g), 5(h), and 5(i) are first, second, and third pseudo SAR images generated by the generator, respectively.
It should be understood that the parts of the specification not set forth in detail belong to the prior art.
Although specific embodiments of the present invention have been described above with reference to the accompanying drawings, it will be appreciated by those skilled in the art that these are merely illustrative and that various changes or modifications may be made to these embodiments without departing from the principles and spirit of the invention. The scope of the invention is only limited by the appended claims.
Claims (7)
1. A SAR image simulation method for generating a countermeasure network based on conditions is characterized by comprising the following steps:
step 1, denoising a target SAR image by using a rolling guide filtering method;
step 2, making a data set by the optical image and the corresponding preprocessed SAR image; the method comprises the following substeps:
step 2.1, reading in the registered heterogeneous image pair;
2.2, extracting feature points from the optical image, cropping image blocks of the same size from the corresponding SAR image centered on the pixels at the same coordinate positions, performing data augmentation after several groups of heterogeneous images have been processed, merging the optical and SAR images, and finally dividing the training set and the test set;
step 3, building an image conversion network structure for generating a countermeasure network based on the condition; the method comprises the following substeps:
step 3.1, optimizing the condition generation countermeasure network by combining Res-Net based on the U-Net network structure;
3.2, constructing a five-layer convolution discriminator network structure;
step 4, taking a training set formed by the optical image and the SAR image as input, iteratively training the model for multiple times, and optimizing a target function by using an Adam algorithm;
and 5, converting and testing the SAR effect graph on the test set.
2. The SAR image simulation method based on the conditional generation countermeasure network as claimed in claim 1, wherein the implementation of step 1 comprises: performing Gaussian filtering on the original SAR image, taking the filtered image as the guidance image, performing iterative filtering operations on it on that basis, and recovering the edges of large-scale objects in the image; the specific steps are as follows:
step 1.1, performing Gaussian filtering on an original SAR image; filtering out spots and fine structures in a small area, wherein the expression is as follows:
In formula (1), J_1(p) denotes the pixel value of pixel p after Gaussian filtering, N(p) denotes the neighborhood around the point used in the filtering calculation, q denotes a pixel involved in that neighborhood, and k_p is a normalization coefficient that keeps the result within range;
step 1.2, performing subsequent iteration in a guiding mode, and strengthening the edge structure of the image, wherein the mathematical expression is as follows:
In formula (2), J_t(p) denotes the pixel value of pixel p after the t-th filtering pass; J_{t-1}(p) and J_{t-1}(q) denote the pixel values of pixels p and q after t−1 passes; σ_s and σ_r denote the spatial scale and the range scale, respectively.
3. The SAR image simulation method based on the conditional generation countermeasure network as claimed in claim 1, wherein the implementation of step 2 comprises:
step 2.1, reading in the registered heterogeneous image pair, extracting all candidate SIFT feature points from each optical image with the SIFT method, removing points whose distance to another feature point is smaller than d, using d to adjust the density of the SIFT feature points collected in the image, with d taken as 20;
step 2.2, after screening, cropping a 256 × 256 image block centered on each retained feature point, and discarding the point if the selected area exceeds the image boundary;
step 2.3, after several groups of heterogeneous images are processed, rotating, mirroring and turning all the images in the two folders of the optical image and the SAR image to complete data amplification;
step 2.4, merging the images in the two folders of the optical image and the SAR image into single 512 × 256 images and storing them in another folder;
and 2.5, randomly dividing the sample in the other folder in the step 2.4 into a training set and a testing set according to the ratio of 80%/20%.
4. The SAR image simulation method based on the conditional generation countermeasure network of claim 1, wherein the generator of the conditional generation countermeasure network in step 3.1 adopts a U-Net network structure comprising an encoder module and a decoder module;
the encoder module comprises 8 layers, each layer being a double (conv + bn + lrelu) + shortcut structure; the number of convolution kernels doubles layer by layer from 64 until it remains at 512; bn is batch normalization, lrelu indicates that the activation function is LeakyReLU, and shortcut refers to the shortcut connection in a residual network;
the decoder module comprises 8 layers, units at the same depth in the encoder and decoder having the same number of convolution kernels; each unit in the decoder has the structure conv + bn + relu, where relu indicates that the activation function is ReLU; corresponding layers of the encoder and decoder modules are concatenated; the convolution kernel of each convolution operation is 3 × 3, and a 2 × 2 max-pooling layer is connected between units.
5. The SAR image simulation method based on the conditional generation countermeasure network of claim 1, wherein in step 3.2 the five-layer convolution discriminator network adopts a PatchGAN structure: PatchGAN divides the image into N × N image blocks of fixed size, the five-layer convolution discriminator judges the authenticity of each block separately, and the responses obtained over one image are finally averaged to give the output result; the patch size is set to 70 × 70; the first four layers of the five-layer convolution discriminator network perform feature extraction on the sample, with the number of convolution kernels increasing from 64, a kernel size of 3 × 3, and a stride of 2; the last convolution layer maps the features to a one-dimensional output with a Sigmoid function as the activation function; batch normalization is applied after each of the first four convolution layers, whose activation function is LeakyReLU with a slope of 0.2.
6. The SAR image simulation method based on condition-generated countermeasure network as claimed in claim 1, wherein the implementation of step 4 comprises:
step 4.1, generating a countermeasure network according to the loss function training condition as follows:
G' = arg min_G max_D L_cGAN(G, D) + λ·L_1(G)    (5)
a real SAR image y is taken as the constraint condition, the random noise is denoted z, and x is the input optical image, where (x, y) obeys the data distribution p_data(x, y) and the random noise z obeys p_z(z); L_cGAN(G, D) denotes the adversarial loss constraint between the generator and the discriminator, L_1(G) denotes the pixel-level constraint between the generator's image blocks and the real image blocks, D(x, y) denotes the discriminator's matching prediction on (x, y), G(x, z) denotes the generator's output image given the input optical image and noise, D(x, G(x, z)) denotes the discriminator's matching prediction on (x, G(x, z)), and λ denotes the coefficient of the introduced L_1 loss, set to 100; the generator is trained to minimize L_cGAN, and the discriminator is trained to maximize L_cGAN;
step 4.2, the training of the conditional generation countermeasure network comprises the following steps:
step 4.2.1, initialize the L_1 loss hyperparameter λ and the total number of iterations t;
step 4.2.2, for i = 1, 2, ..., t do;
step 4.2.3, draw m pairs of sample images:
step 4.2.5, update the parameters of the discriminator D, maximizing the following formula:
step 4.2.7, update the parameters of the generator G, minimizing the following formula:
step 4.2.9, end;
wherein I_o denotes the optical image, I_s the corresponding SAR image, and I_g the generated pseudo-SAR image;
step 4.3, the Adam algorithm optimizes the objective function according to the following formula:
m_t = β_1 × m_{t-1} + (1 − β_1) × g_t    (6)
wherein m_t and v_t denote the first- and second-order moment estimates of the gradient, β_1 and β_2 denote the exponential decay rates, g_t is the gradient of the objective function at θ_{t-1}, m̂_t and v̂_t are the bias-corrected versions of m_t and v_t, ε is a small constant, and η denotes the learning rate;
the Adam algorithm flow is as follows:
step 4.3.1, input η, β_1, β_2 and the maximum number of epochs;
step 4.3.2, at t = 0, initialize the parameter θ_0, and set the first-order moment estimate m_0 = 0 and the second-order moment estimate v_0 = 0;
Step 4.3.3, updating the iteration times: t is t + 1;
step 4.3.4, select m samples {x^(1), ..., x^(m)} from the training sample set, denote the corresponding target samples y^(i), and compute the gradient at θ_{t-1}:
step 4.3.5, update m_t: m_t = β_1 × m_{t-1} + (1 − β_1) × g_t;
7. The SAR image simulation method based on the conditional generation countermeasure network of claim 6, wherein the learning rate η_t of the Adam optimization algorithm is calculated as follows:
wherein η denotes the initial learning rate, epoch denotes the total number of iterations, iter denotes the current iteration, and offset denotes the iteration at which the learning rate starts to decrease during training; when iter is smaller than offset, the preset larger η is used as the current learning rate, and when iter reaches offset, the learning rate is gradually reduced.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010256351.4A CN111462012A (en) | 2020-04-02 | 2020-04-02 | SAR image simulation method for generating countermeasure network based on conditions |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111462012A true CN111462012A (en) | 2020-07-28 |