CN111931553A - Remote sensing data enhancement generative adversarial network method, system, storage medium and application

Remote sensing data enhancement generative adversarial network method, system, storage medium and application

Info

Publication number
CN111931553A
CN111931553A (application CN202010496962.6A)
Authority
CN
China
Prior art keywords
remote sensing
generator
sensing data
discriminator
network
Prior art date
Legal status
Granted
Application number
CN202010496962.6A
Other languages
Chinese (zh)
Other versions
CN111931553B (en)
Inventor
陈晨
马洪祥
吕宁
周扬
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date: 2020-06-03
Filing date: 2020-06-03
Publication date: 2020-11-13
Application filed by Xidian University
Priority to CN202010496962.6A
Publication of CN111931553A
Application granted
Publication of CN111931553B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G06V 20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods


Abstract

The invention belongs to the technical field of remote sensing image processing and discloses a generative adversarial network method for remote sensing data enhancement, together with a system, a storage medium and applications. The invention provides an improved down-sampling module that effectively reduces the semantic information loss of an image. The invention also increases the image generation speed and addresses the long running time of the algorithm: for a deep neural network, the larger the number of network parameters, the longer the algorithm takes to run, while a network with fewer parameters runs faster. The generative model is therefore divided into several sub-networks with similar structures, and a sub-network with fewer parameters is selected as the generative model when the generation quality differs only slightly, which effectively increases the image generation speed.

Description

Remote sensing data enhancement generative adversarial network method, system, storage medium and application
Technical Field
The invention belongs to the technical field of remote sensing image processing, and particularly relates to a generative adversarial network method for remote sensing data enhancement, together with a system, a storage medium and an application.
Background
At present, with the development of artificial intelligence, professionals in the field of remote sensing image processing can complete tasks such as image classification and detection with the help of deep learning algorithms. The classification and detection results of remote sensing images can be used in many applications, such as the detection of illegal buildings and the detection of changes in land-use type. However, remote sensing images are difficult to obtain and are affected by interference factors such as the orbital period during acquisition, which leads to a contradiction: the number of remote sensing image samples cannot meet the training requirements of deep learning algorithms. Data enhancement techniques can effectively address this issue. Early data enhancement techniques include cropping, scaling, color transformation, flipping and so on; although they increase the number of samples, they do not change the diversity of the samples and produce images of lower quality. In recent years, deep generative models have been widely applied in the field of image processing as a new data enhancement technique. A deep generative model can not only increase the number of samples but also generate high-quality and diverse samples. Precisely because deep learning algorithms place high requirements on the resolution and diversity of remote sensing image samples, expanding the sample set with a deep generative model is the best way to resolve this contradiction.
Current deep generative models mainly include two types: the GAN (generative adversarial network) and the VAE (variational auto-encoder). Compared with the VAE, the GAN does not need to specify the data distribution, its training process is clear, and the quality of the generated images is higher. Therefore, the GAN is chosen as the basic generative model.
The prior art proposes an algorithm for generating images from images based on a conditional generative adversarial network. The algorithm learns the mapping G from an observed image x and random noise z to an output image y, G: {x, z} -> y, thereby completing image generation. The algorithm includes a generator for generating an image and a discriminator for discriminating whether the generated image is real or fake. The discriminator learns continuously to distinguish real images from generated images, and the generator learns continuously to fool the discriminator. When the discriminator cannot distinguish whether an input image is real or generated, the generator can be used to perform image generation. However, the semantic information loss of the image generated by the prior art is large: the repeated down-sampling of the image continuously discards high-level semantic information, the quality of the generated image is low, and this brings great challenges to image classification and detection tasks. The image generation speed is also low: the existing technical scheme takes a long time when the number of generated samples is large, and when the data volume is very large a generation task takes days or even weeks, wasting a large amount of time.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) The semantic information loss of the generated image in the prior art is large.
(2) The image generation speed in the prior art is slow, which wastes a large amount of time.
The difficulty in solving the above problems and defects is:
1. To reduce the loss of high-level semantic information caused by repeated down-sampling of the image, the down-sampling process of the prior art must be improved; how to design a new down-sampling module is one of the difficult problems.
2. How to optimize the network structure of the generative model in the prior art and reduce the number of network parameters is the difficult problem in improving the image generation speed.
The significance of solving the problems and the defects is as follows:
1. Improving the quality of the generated images increases the accuracy of image classification and detection tasks and helps professionals complete classification and detection tasks in related fields more quickly.
2. Improving the image generation speed makes it possible to obtain more generated samples in a limited time; researchers no longer need to spend a large amount of time waiting for the generation process to finish, which greatly saves time.
3. High-quality samples can be generated quickly even when only a few training samples are available, relieving professionals of the problem of being unable to carry out related work because of a lack of samples.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a generative adversarial network method for remote sensing data enhancement, together with a system, a storage medium and an application.
The invention is realized as follows: a generative adversarial network method for remote sensing data enhancement comprises the following steps:
first, a generator and a discriminator are constructed, used respectively for generating and discriminating images;
second, a loss function is constructed to guide the training and learning of the adversarial network model;
third, during training the generator tries to decrease the loss function while the discriminator tries to increase it; when the generator and the discriminator reach the Nash equilibrium point, the generator is used to generate samples;
fourth, the generator generates samples, and the discriminator judges the fidelity of the generated samples.
Further, the main structure of the generator is based on a Unet++ network, and the shallow features and the deep features are combined through long and short skip connections.
Further, the adversarial network model comprises: a generator G, a discriminator D and a down-sampling module.
Further, the shallow features and the deep features of the generator G are combined either through short connections only or through long and short connections.
Further, the plurality of discriminators D share the same network structure and participate in training simultaneously.
Further, the down-sampling module comprises a data normalization layer, a convolution layer, a feature-map splicing layer and an activation layer.
Further, the loss function is:
L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 - D(x, G*(x, z)))];
wherein D(x, y) denotes the discrimination result of the discriminator on a real sample and D(x, G*(x, z)) denotes the discrimination result of the discriminator on a generated sample.
Further, D(x, G*(x, z)) is given by the following formula:
D(x, G*(x, z)) = λ_1·D(x, G(x, z)_1) + λ_2·D(x, G(x, z)_2) + λ_3·D(x, G(x, z)_3) + λ_4·D(x, G(x, z)_4);
wherein λ_i (i = 1, 2, 3, 4) denotes the weight of the i-th sub-network, λ_1 + λ_2 + λ_3 + λ_4 = 1 and λ_1 < λ_2 < λ_3 < λ_4.
Further, the training comprises: G denotes the generator, which is used to generate samples and has multiple outputs, where G(x, z)_k (k = 1, 2, 3, 4) denotes the generated sample output by the k-th sub-network; D denotes a discriminator used to judge whether an input sample is a generated sample; z denotes noise drawn from a Gaussian distribution, x denotes a sample segmentation map, and y denotes a real sample. A realistic generated sample is obtained by inputting a sample segmentation map and random noise, completing the data enhancement task.
It is another object of the present invention to provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
first, a generator and a discriminator are constructed, used respectively for generating and discriminating images;
second, a loss function is constructed to guide the training and learning of the adversarial network model;
third, during training the generator tries to decrease the loss function while the discriminator tries to increase it; when the generator and the discriminator reach the Nash equilibrium point, the generator is used to generate samples;
fourth, the generator generates samples, and the discriminator judges the fidelity of the generated samples.
Another object of the present invention is to provide a remote sensing data enhancement generative adversarial network system operating the above remote sensing data enhancement generative adversarial network method, the system comprising:
a generator for generating a sample;
and the discriminator is used for discriminating the fidelity of the generated sample.
Further, the remote sensing data enhancement generative adversarial network system further includes a down-sampling module;
the down-sampling module comprises, from top to bottom: a data normalization layer, a convolution layer, a feature-map splicing layer and an activation layer;
the generator is connected to a plurality of discriminators, as shown in fig. 3, which is also key to increasing the image generation speed.
The invention also aims to provide a remote sensing image processing terminal carrying the remote sensing data enhancement generative adversarial network system.
By combining all the above technical schemes, the invention has the following advantages and positive effects: aiming at the problem that the number of remote sensing image samples is too small to meet the training requirement of deep learning algorithms, the invention provides a new generative adversarial network algorithm, D-sGAN, to perform data enhancement on existing samples. With respect to the problems in the prior art, the invention mainly aims to: first, reduce the semantic information loss of the generated image so as to improve the quality of the generated image; and second, increase the image generation speed and solve the problem of the long running time of the algorithm.
The invention provides a generative adversarial network algorithm, D-sGAN, for remote sensing image data enhancement. D-sGAN can effectively reduce the semantic loss of the image, generate more realistic samples, and greatly improve the image generation speed while maintaining image quality. To reduce the semantic loss of the image, a new down-sampling module is provided: a feature-map splicing layer is added after the convolution layer, and the image segmentation map is used to supervise the feature map, reducing the semantic loss caused by the down-sampling process. To improve the generation speed while maintaining generation quality, the whole generative adversarial network is divided into sub-networks with similar structures, the image generation quality of each sub-network is compared, and when the generation quality is close, the sub-network with fewer network parameters is selected to generate the image.
The invention reduces the semantic information loss of the generated image and improves the quality of the generated image. An image segmentation map is added for supervision during the down-sampling process, correcting the semantic loss introduced by down-sampling. Meanwhile, the generator network adopts a Unet++ structure with long and short connections, combining shallow and deep features, which further reduces the semantic information loss and improves the quality of the generated image. The invention also increases the image generation speed and solves the problem of the long running time of the algorithm: for a deep neural network, the larger the number of network parameters, the longer the algorithm takes to run, while a network with fewer parameters runs faster. The generative model is therefore divided into several sub-networks with similar structures, and a sub-network with fewer parameters is selected as the generative model when the generation quality differs only slightly, which effectively increases the image generation speed.
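As a small illustration of the parameter-count argument above, the snippet below shows one common way to count the trainable parameters of a PyTorch module; the two example modules are stand-ins chosen for the illustration, not D-sGAN sub-networks.

```python
# Count trainable parameters of a model; smaller counts generally mean faster inference.
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Stand-in "small" and "large" sub-networks (illustrative only).
small = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Conv2d(8, 1, 3))
large = nn.Sequential(nn.Conv2d(1, 64, 3), nn.ReLU(),
                      nn.Conv2d(64, 64, 3), nn.ReLU(), nn.Conv2d(64, 1, 3))
print(count_parameters(small), count_parameters(large))  # the smaller model has far fewer parameters
```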
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained from the drawings without creative efforts.
FIG. 1 is a flow chart of a method for enhancing generation of a countermeasure network for remote sensing data according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of a remote sensing data enhanced countermeasure network system provided by an embodiment of the invention;
in the figure: 1. a generator; 2. and a discriminator.
Fig. 3 is a schematic diagram of generating a countermeasure network model according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the training process provided by the embodiment of the present invention. G denotes the generator, which is used to generate samples and has multiple outputs, where G(x, z)_k (k = 1, 2, 3, 4) denotes the generated sample output by the k-th sub-network; D denotes a discriminator used to judge whether an input sample is a generated sample; z denotes noise drawn from a Gaussian distribution, x denotes a sample segmentation map, and y denotes a real sample. A realistic generated sample is obtained by inputting a sample segmentation map and random noise, completing the data enhancement task.
Fig. 5 is a schematic diagram of a downsampling module according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of different sub-networks provided by the embodiment of the present invention.
FIG. 7 is a schematic diagram showing the generation results of D-sGAN (1) and D-sGAN (2).
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides a generative adversarial network method for remote sensing data enhancement, together with a system, storage medium and application, and the present invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the generative adversarial network method for remote sensing data enhancement provided by the invention comprises the following steps:
S101: constructing a generator and a discriminator, used respectively for generating and discriminating images;
S102: constructing a loss function to guide the training and learning of the adversarial network model;
S103: during training, the generator tries to decrease the loss function while the discriminator tries to increase it; when the generator and the discriminator reach the Nash equilibrium point, the generator is used to generate samples;
S104: the generator generates samples, and the discriminator discriminates the fidelity of the generated samples.
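Purely as an illustrative sketch of steps S101 to S104, the following PyTorch-style code shows the alternating optimization of a small conditional generator and discriminator. The tiny network definitions, noise shape and learning rates are assumptions made only for this example; they are not the D-sGAN architecture described below.

```python
# Illustrative sketch of S101-S104 under assumed shapes (PyTorch).
# The tiny Generator/Discriminator are placeholders, not the patent's D-sGAN.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
    def forward(self, x, z):                 # x: segmentation map, z: noise map
        return self.net(torch.cat([x, z], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2), nn.LeakyReLU(0.2),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, 1), nn.Sigmoid())
    def forward(self, x, img):               # conditional: (segmentation map, image)
        return self.net(torch.cat([x, img], dim=1))

G, D = Generator(), Discriminator()          # S101: build generator and discriminator
bce = nn.BCELoss()                           # S102: adversarial loss in cross-entropy form
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

x = torch.rand(4, 1, 64, 64)                 # stand-in segmentation maps
y = torch.rand(4, 1, 64, 64)                 # stand-in real samples
for _ in range(100):                         # S103: alternating optimization
    z = torch.randn(4, 1, 64, 64)            # Gaussian noise
    fake = G(x, z)

    d_real, d_fake = D(x, y), D(x, fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()    # discriminator increases the objective

    d_fake = D(x, fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()    # generator decreases it

samples = G(x, torch.randn(4, 1, 64, 64))    # S104: generate samples near equilibrium
```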
As shown in fig. 2, the remote sensing data enhancement generative adversarial network system provided by the invention comprises:
a generator 1 for generating samples; and
a discriminator 2 for discriminating the fidelity of the generated samples.
The technical solution of the present invention is further described below with reference to the accompanying drawings.
GAN: a generative adversarial network is a method for training a classifier in a semi-supervised manner; it can help to address the problem of having few labelled training samples and consists of a generator and a discriminator.
VAE: a variational auto-encoder, an unsupervised learning model that learns complex distributions.
Unet: a deep learning network, originally used for image segmentation and now also used for image classification and detection tasks.
Unet++: a deep learning network that structurally improves on Unet.
D-sGAN: deeply-supervised GAN, the name of the generative adversarial network algorithm proposed by the present invention.
The invention provides a new generative adversarial network algorithm, D-sGAN, for data enhancement. The overall schematic diagram of the proposed generative adversarial network model is shown in fig. 3: in fig. 3, G denotes the generator, D denotes a discriminator, the grey squares denote the down-sampling module of the present invention, the dashed arrows denote the long and short connections between feature maps, and the upward-slanting arrows denote the up-sampling process. The main structure of the generator G in fig. 3 is based on the Unet++ network, and the shallow and deep features use long and short connections, although the invention can also be implemented using short connections only.
The proposed generative adversarial network algorithm D-sGAN comprises a generator G and a plurality of discriminators D with the same structure. The generator is used to generate samples, and the discriminators are used to discriminate the fidelity of the generated samples. To enable the whole model to generate higher-quality images, the following loss function is constructed to guide the training and learning process of the model:
L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 - D(x, G*(x, z)))];
wherein D(x, y) denotes the discrimination result of the discriminator on a real sample and D(x, G*(x, z)) denotes the discrimination result of the discriminator on a generated sample. Since the proposed generative adversarial network algorithm D-sGAN comprises a plurality of discriminators, D(x, G*(x, z)) can be expressed by the following formula:
D(x, G*(x, z)) = λ_1·D(x, G(x, z)_1) + λ_2·D(x, G(x, z)_2) + λ_3·D(x, G(x, z)_3) + λ_4·D(x, G(x, z)_4);
wherein λ_i (i = 1, 2, 3, 4) denotes the weight of the i-th sub-network, λ_1 + λ_2 + λ_3 + λ_4 = 1 and λ_1 < λ_2 < λ_3 < λ_4.
During the training process, the generator attempts to decrease the loss function, while the discriminator attempts to increase it. When the generator and the discriminator reach the Nash equilibrium point, the generator can be used to generate samples.
The overall training flow diagram of the proposed generative adversarial network D-sGAN is shown in fig. 4. In fig. 4, G denotes the generator, which is used to generate samples and has multiple outputs, where G(x, z)_k (k = 1, 2, 3, 4) denotes the generated sample output by the k-th sub-network. D denotes a discriminator used to judge whether an input sample is a generated sample. z denotes noise drawn from a Gaussian distribution, x denotes a sample segmentation map, and y denotes a real sample. According to the flow chart of fig. 4, a realistic generated sample is obtained simply by inputting a sample segmentation map and random noise, completing the data enhancement task.
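A minimal sketch of how the weighted combination D(x, G*(x, z)) and the loss L_cGAN could be computed is given below, assuming each discriminator call returns a probability in (0, 1) and that the four generator outputs G(x, z)_k are already available; the function names, weight values and tensor conventions are illustrative assumptions, not taken from the patent.

```python
# Sketch of the weighted discriminator term D(x, G*(x, z)) and the cGAN loss
# (illustrative; tensor shapes and the discriminator call signature are assumed).
import torch

def d_star(D, x, gen_outputs, lambdas=(0.1, 0.2, 0.3, 0.4)):
    """Weighted combination of the discriminator scores of the four sub-network
    outputs G(x, z)_k; the lambdas sum to 1 and increase with sub-network depth."""
    return sum(lam * D(x, g) for lam, g in zip(lambdas, gen_outputs))

def cgan_loss(D, x, y, gen_outputs, eps=1e-8):
    """L_cGAN(G, D) = E[log D(x, y)] + E[log(1 - D(x, G*(x, z)))]."""
    real_term = torch.log(D(x, y) + eps).mean()
    fake_term = torch.log(1.0 - d_star(D, x, gen_outputs) + eps).mean()
    return real_term + fake_term   # D tries to increase this, G tries to decrease it
```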
The technical solution of the present invention is further described with reference to the following specific examples.
Example 1: reducing the semantic information loss of the generated image.
In the existing technical scheme, semantic loss of an image is caused by multiple scale transformations on the image in a downsampling process. Therefore, the invention provides a new downsampling module which can effectively reduce semantic loss. A schematic diagram of the downsampling module is shown in fig. 5.
In fig. 5, the designed down-sampling module consists, from top to bottom, of: a data normalization layer, a convolution layer, a feature-map splicing layer and an activation layer. Compared with the original down-sampling process, a feature-map splicing layer is added after the convolution layer, and the segmentation map of the remote sensing image is used to supervise the corresponding feature map, reducing the semantic loss after image convolution.
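A minimal sketch of such a down-sampling block is shown below, assuming PyTorch and assuming that the supervision by the segmentation map is realized by resizing it and splicing (concatenating) it with the feature map along the channel dimension; the layer hyper-parameters are illustrative choices, since the patent does not fix them.

```python
# Illustrative down-sampling block: normalization -> convolution ->
# feature-map splicing with the (resized) segmentation map -> activation.
# Channel counts, kernel size and the concatenation-based "supervision" are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DownSampleBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_ch)                              # data normalization layer
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1)   # convolution layer
        self.act = nn.LeakyReLU(0.2)                                   # activation layer

    def forward(self, feat, seg):
        h = self.conv(self.norm(feat))
        seg_small = F.interpolate(seg, size=h.shape[-2:], mode="nearest")
        h = torch.cat([h, seg_small], dim=1)                           # feature-map splicing layer
        return self.act(h)

# Usage: the next block must expect out_ch plus the segmentation channels as its input.
block = DownSampleBlock(in_ch=16, out_ch=32)
feat = torch.rand(1, 16, 128, 128)
seg = torch.rand(1, 1, 128, 128)            # remote sensing image segmentation map
out = block(feat, seg)                      # shape: (1, 33, 64, 64)
```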
Example 2: improving the image generation speed and reducing the time consumed by image generation.
The generator in the proposed generative adversarial network D-sGAN is connected to a plurality of discriminators D, as shown in fig. 3 above, which is also the key to increasing the image generation speed. The adversarial generation network of fig. 3 comprises four sub-networks, denoted D-sGAN L1, D-sGAN L2, D-sGAN L3 and D-sGAN L4; a detailed schematic diagram of the sub-networks is shown in fig. 6.
In the prior art, the generative model is not divided into sub-networks; in terms of fig. 6, the generative model consists only of the D-sGAN L4 network, so the number of parameters of the generative model is too large, which greatly reduces the image generation speed.
The invention divides the generative model into several sub-networks with similar structures. During training, if the quality of the output of sub-network D-sGAN L1 is close to that of sub-network D-sGAN L4, then sub-network D-sGAN L1 can replace sub-network D-sGAN L4 for image generation; similarly, if the output quality of sub-networks D-sGAN L2 or D-sGAN L3 is close to that of D-sGAN L4, they can replace D-sGAN L4 for image generation. Because the numbers of network parameters of sub-networks D-sGAN L1, D-sGAN L2 and D-sGAN L3 are far lower than that of D-sGAN L4, image generation quality is maintained while the generation speed is greatly improved.
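The selection among sub-networks can be sketched as follows, assuming each candidate sub-network reports its parameter count and a quality score such as the Class IOU used in the experiments below; the tolerance value and all numbers are illustrative assumptions, not measurements from the patent.

```python
# Sketch of choosing the smallest sub-network whose generation quality is close
# to that of the full network (D-sGAN L4); names, numbers and tolerance are illustrative.
from dataclasses import dataclass

@dataclass
class SubNet:
    name: str
    num_params: int     # number of network parameters
    class_iou: float    # generation quality proxy (e.g. Class IOU on a validation set)

def select_subnetwork(subnets, tolerance=0.01):
    """Return the sub-network with the fewest parameters whose quality is within
    `tolerance` of the best sub-network's quality."""
    best_quality = max(s.class_iou for s in subnets)
    eligible = [s for s in subnets if best_quality - s.class_iou <= tolerance]
    return min(eligible, key=lambda s: s.num_params)

# Example with made-up numbers: L3 is chosen because its quality is within
# tolerance of L4 while it has far fewer parameters.
candidates = [
    SubNet("D-sGAN L1", 1_000_000, 0.55),
    SubNet("D-sGAN L2", 3_000_000, 0.58),
    SubNet("D-sGAN L3", 6_000_000, 0.60),
    SubNet("D-sGAN L4", 12_000_000, 0.61),
]
print(select_subnetwork(candidates).name)   # -> "D-sGAN L3"
```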
Embodiment 1 of the present invention concerns the improved down-sampling module. To verify its effectiveness, the invention compares the effects of two generative adversarial networks, D-sGAN (1) and D-sGAN (2), on the test set. The difference between the two networks is that D-sGAN (2) uses the improved down-sampling module of the present invention and D-sGAN (1) does not. The comparison of the two is shown in Table 1 below.
TABLE 1 comparison of the effects of D-sGAN (1) and D-sGAN (2)
(Table 1 is provided as an image in the original publication and is not reproduced here.)
The Class IOU in the table represents the classification gain of the image, and generally speaking, the more semantic information the image contains, the larger the value of the Class IOU. As seen from the table, the classification gain of D-sGAN (2) is larger than that of D-sGAN (1), which shows that the downsampling module proposed by the present invention helps to reduce the semantic information loss of the image.
The quality of image generation also reflects the effectiveness of the proposed down-sampling module. The invention therefore compares the generation results of D-sGAN (1) and D-sGAN (2) on the test set, as shown in fig. 7.
As can be seen from fig. 7, the samples generated by D-sGAN (2) are clearer than the samples generated by D-sGAN (1), which shows that the downsampling module proposed by the present invention can improve the image generation quality.
To demonstrate that the present invention can reduce the loss of semantic information of the generated image, the effect of the proposed generative adversarial network algorithm D-sGAN is compared with several existing generative adversarial network algorithms (CoGAN, SimGAN, CycleGAN), as shown in Table 2.
Table 2 comparison of the present invention with the prior art
(Table 2 is provided as an image in the original publication and is not reproduced here.)
The Class IOU in Table 2 represents the classification gain of the image; generally, the more semantic information an image contains, the larger its Class IOU. In the table, the Class IOU of D-sGAN is significantly larger than that of the other technologies, which indicates that, compared with the prior art, the proposed generative adversarial network algorithm D-sGAN reduces the semantic information loss of the generated image and achieves higher image classification accuracy.
To demonstrate that the present invention can increase the image generation speed and reduce the generation time, the effects of the several sub-networks of the proposed generative adversarial network algorithm D-sGAN (D-sGAN L1, D-sGAN L2, D-sGAN L3, D-sGAN L4) are compared, as shown in Table 3.
TABLE 3 Generation Effect of different sub-networks
(Table 3 is provided as an image in the original publication and is not reproduced here.)
The test equipment used by the invention is an NVIDIA TITAN Xp with 12 GB of memory, and the test data set contains 20k samples. The structures of the different sub-networks are shown in fig. 6. In the table, inference time represents the time taken by the different sub-networks to generate an image, and Class IOU represents the classification gain of the image. Taking the sub-networks D-sGAN L3 and D-sGAN L4 as examples, D-sGAN L3 reduces the generation time by 20 s at the cost of a 0.01 loss in classification gain. This result shows that, when the difference in classification gain is small, the sub-networks into which the method divides the model reduce the generation time and increase the image generation speed.
In the description of the present invention, "a plurality" means two or more unless otherwise specified; the terms "upper", "lower", "left", "right", "inner", "outer", "front", "rear", "head", "tail", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only for convenience in describing and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, should not be construed as limiting the invention. Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided on a carrier medium such as a disk, CD-or DVD-ROM, programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier, for example. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., or by software executed by various types of processors, or by a combination of hardware circuits and software, e.g., firmware.
The above description is only for the purpose of illustrating the present invention and the appended claims are not to be construed as limiting the scope of the invention, which is intended to cover all modifications, equivalents and improvements that are within the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A generative adversarial network method for remote sensing data enhancement, characterized by comprising the following steps:
first, constructing a generator and a discriminator, used respectively for generating and discriminating images;
second, constructing a loss function to guide the training and learning of the adversarial network model;
third, during training, the generator tries to decrease the loss function while the discriminator tries to increase it; when the generator and the discriminator reach the Nash equilibrium point, the generator is used to generate samples;
fourth, the generator generates samples, and the discriminator judges the fidelity of the generated samples.
2. The generative adversarial network method for remote sensing data enhancement of claim 1, wherein the adversarial network model comprises: a generator G, a plurality of discriminators D and a down-sampling module; and the shallow features and the deep features of the generator G are combined through short connections or through long and short connections.
3. The generative adversarial network method for remote sensing data enhancement of claim 2, wherein said plurality of discriminators D share the same network structure and are trained simultaneously.
4. The generative adversarial network method for remote sensing data enhancement of claim 2, wherein said down-sampling module comprises a data normalization layer, a convolution layer, a feature-map splicing layer and an activation layer; and after convolution, a segmentation map of the remote sensing image is used to supervise the corresponding feature map.
5. The generative adversarial network method for remote sensing data enhancement of claim 1, wherein the loss function is:
L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 - D(x, G*(x, z)))];
wherein D(x, y) denotes the discrimination result of the discriminator on a real sample and D(x, G*(x, z)) denotes the discrimination result of the discriminator on a generated sample.
6. The generative adversarial network method for remote sensing data enhancement of claim 5, wherein D(x, G*(x, z)) is given by the following formula:
D(x, G*(x, z)) = λ_1·D(x, G(x, z)_1) + λ_2·D(x, G(x, z)_2) + λ_3·D(x, G(x, z)_3) + λ_4·D(x, G(x, z)_4);
wherein λ_i (i = 1, 2, 3, 4) denotes the weight of the i-th sub-network, λ_1 + λ_2 + λ_3 + λ_4 = 1 and λ_1 < λ_2 < λ_3 < λ_4.
7. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
first, constructing a generator and a discriminator, used respectively for generating and discriminating images;
second, constructing a loss function to guide the training and learning of the adversarial network model;
third, during training, the generator tries to decrease the loss function while the discriminator tries to increase it; when the generator and the discriminator reach the Nash equilibrium point, the generator is used to generate samples;
fourth, the generator generates samples, and the discriminator judges the fidelity of the generated samples.
8. A remote sensing data enhancement generative adversarial network system operating the generative adversarial network method for remote sensing data enhancement of any one of claims 1 to 7, characterized in that the remote sensing data enhancement generative adversarial network system comprises:
a generator for generating a sample;
and the discriminator is used for discriminating the fidelity of the generated sample.
9. The remote sensing data enhancement generative adversarial network system of claim 8, wherein the remote sensing data enhancement generative adversarial network system further comprises:
a down-sampling module;
the down-sampling module respectively comprises the following components from top to bottom: a data normalization layer, a convolution layer, a characteristic diagram splicing layer and an activation layer;
the generator is connected with a plurality of discriminators.
10. A remote sensing image processing terminal, characterized in that the remote sensing image processing terminal carries the remote sensing data enhancement generative adversarial network system of claim 9.
CN202010496962.6A 2020-06-03 2020-06-03 Generative adversarial network method for remote sensing data enhancement, system, storage medium and application Active CN111931553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010496962.6A CN111931553B (en) 2020-06-03 2020-06-03 Generative adversarial network method for remote sensing data enhancement, system, storage medium and application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010496962.6A CN111931553B (en) 2020-06-03 2020-06-03 Generative adversarial network method for remote sensing data enhancement, system, storage medium and application

Publications (2)

Publication Number Publication Date
CN111931553A 2020-11-13
CN111931553B CN111931553B (en) 2024-02-06

Family

ID=73317110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010496962.6A Active CN111931553B (en) 2020-06-03 2020-06-03 Generative adversarial network method for remote sensing data enhancement, system, storage medium and application

Country Status (1)

Country Link
CN (1) CN111931553B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075581A1 (en) * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
CN110287800A (en) * 2019-05-29 2019-09-27 河海大学 A kind of remote sensing images scene classification method based on SGSE-GAN
CN110992262A (en) * 2019-11-26 2020-04-10 南阳理工学院 Remote sensing image super-resolution reconstruction method based on generation countermeasure network
AU2020100274A4 (en) * 2020-02-25 2020-03-26 Huang, Shuying DR A Multi-Scale Feature Fusion Network based on GANs for Haze Removal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Bi Xiaojun; Pan Mengdi: "Super-resolution reconstruction of airborne remote sensing images based on a generative adversarial network", CAAI Transactions on Intelligent Systems, no. 01 (in Chinese) *
Su Jianmin; Yang Lanxin: "Single-frame remote sensing image super-resolution based on a generative adversarial network", Computer Engineering and Applications, no. 12 (in Chinese) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205692A (en) * 2022-09-16 2022-10-18 成都戎星科技有限公司 Typical feature intelligent identification and extraction method based on generation of countermeasure network
CN115205692B (en) * 2022-09-16 2022-11-29 成都戎星科技有限公司 Typical feature intelligent identification and extraction method based on generation of countermeasure network
CN117612020A (en) * 2024-01-24 2024-02-27 西安宇速防务集团有限公司 SGAN-based detection method for resisting neural network remote sensing image element change

Also Published As

Publication number Publication date
CN111931553B (en) 2024-02-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant