CN111292260A - Construction method of evolutionary neural network and hyperspectral image denoising method based on evolutionary neural network - Google Patents

Construction method of evolutionary neural network and hyperspectral image denoising method based on evolutionary neural network

Info

Publication number
CN111292260A
Authority
CN
China
Prior art keywords
network
chromosome
individuals
chromosomes
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010051941.3A
Other languages
Chinese (zh)
Inventor
闫超
孙亚楠
刘渝桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Yifei Technology Co Ltd
Original Assignee
Sichuan Yifei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Yifei Technology Co Ltd filed Critical Sichuan Yifei Technology Co Ltd
Priority to CN202010051941.3A priority Critical patent/CN111292260A/en
Publication of CN111292260A publication Critical patent/CN111292260A/en
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/12: Computing arrangements based on biological models using genetic models
    • G06N 3/126: Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for constructing an evolutionary neural network, which comprises the following steps: creating and initializing a network chromosome population of N network chromosomes by packaging the convolution layer, the reflection filling layer and the batch normalization layer into blocks; calculating the mean square error of each network chromosome in the network chromosome population, and presetting a threshold value of the mean square error; selecting two parent network chromosome individuals by applying the relaxed binary competition selection algorithm twice; aligning and crossing the two parent network chromosome individuals to obtain two offspring network chromosomes; selecting any one of the offspring network chromosomes for mutation, until an offspring population of N network chromosome individuals is obtained; and selecting (N-x) network chromosome individuals from the network chromosome population of the N network chromosomes and the offspring population of the N network chromosome individuals by an elite selection strategy, repeating the crossover and mutation to obtain an optimized offspring population of N individuals, and selecting from it the optimal network chromosome as the evolutionary neural network.

Description

Construction method of evolutionary neural network and hyperspectral image denoising method based on evolutionary neural network
Technical Field
The invention relates to the technical field of hyperspectral image denoising, in particular to a construction method of an evolutionary neural network and a hyperspectral image denoising method based on the evolutionary neural network.
Background
A hyperspectral image is a kind of remote-sensing image. Unlike a natural image, it adds one dimension of spectral information to the two-dimensional spatial information. Because different ground components absorb the spectrum differently, the image at a particular wavelength clearly exhibits the corresponding geographic information, for example: lakes, forests, deserts, buildings, etc. In this way, a single three-dimensional image can reflect not only the terrain and ground-object information of a two-dimensional natural image, but its third, spectral dimension also carries additional geographical information. Hyperspectral images are therefore widely applied in agriculture, forestry, geological exploration, environmental monitoring, urban planning and other fields.
However, a space-borne imaging spectrometer records data by acquiring the solar radiation signals of ground features, and these signals are affected by various factors during radiative transfer; meanwhile, the spectrometer itself suffers from component errors, current effects and the like. Various mixed noises are therefore inevitably introduced into the imaged result, typically: Gaussian noise, stripe noise, Poisson noise, etc. Such noise strongly affects subsequent processing of the hyperspectral image, such as segmentation and classification, so denoising becomes a necessary preprocessing step.
At present, the hyperspectral image denoising algorithms in the prior art are mainly classified into three categories:
The first category is filtering-based methods. Because a hyperspectral image has only one more dimension of spectral information than a traditional natural image, each band can be regarded as a natural image, so excellent existing natural-image denoising methods can be applied directly. Most of these methods process the image in the spatial domain or a transform domain; some even perform three-dimensional block matching with collaborative filtering (BM3D) and achieve very good results. An obvious disadvantage is that these methods are sensitive to the chosen transform, many of which depend on manual settings. Moreover, most of them ignore the joint spatial and spectral information, so the denoised image loses much key information of the hyperspectral image.
The second category is optimization-based methods, which rely on prior knowledge or assumptions such as Total Variation, Non-local similarity, Sparse Representation and Low Rank structure. These methods exploit known characteristics of the hyperspectral image for denoising, combine its spatial and spectral information well, obtain a denoised image without damaging its characteristics, and perform well in denoising effect. However, this class of methods adapts poorly to a variety of mixed noise.
The third category is deep-learning-based methods, which generally build a neural network and use it for hyperspectral image denoising after training. This approach tends to outperform the second category on mixed noise because it does not need to model the distribution characteristics of the noise. However, the neural network must be designed manually: an expert familiar with both neural networks and hyperspectral image denoising is needed to determine the network structure and initialize its parameters, and manually designed networks are generally very large, so training them and using them for denoising consumes a large amount of computing resources.
Therefore, a construction method of an evolutionary neural network with simple logic, convenient construction and computational resource saving and a hyperspectral image denoising method based on the evolutionary neural network are urgently needed to be provided.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a method for constructing an evolved neural network and a hyperspectral image denoising method based on the evolved neural network, and the technical scheme adopted by the present invention is as follows:
a method for constructing an evolutionary neural network comprises the following steps:
packaging the convolution layer, the reflection filling layer and the batch normalization layer into blocks, and constructing and initializing a network chromosome population of N network chromosomes; N is a natural number greater than or equal to 5;
calculating and obtaining the mean square error of any network chromosome in the network chromosome population, and presetting the threshold value of the mean square error;
selecting two parent network chromosome individuals by adopting a twice relaxation binary competition selection algorithm;
aligning and crossing the two parent network chromosome individuals to obtain two offspring network chromosomes;
selecting any one of the network chromosomes of the offspring for mutation until offspring populations of N network chromosome individuals are obtained;
adopting an elite selection strategy to select (N-x) network chromosome individuals from the network chromosome population of the N network chromosomes and the offspring population of the N network chromosome individuals, repeating the crossover and mutation to obtain an optimized offspring population of N individuals, and selecting the optimal network chromosome as the evolutionary neural network; x is a natural number satisfying 1 ≤ x < N.
Further, packaging the convolution layer, the reflection filling layer and the batch normalization layer into blocks and constructing and initializing the network chromosome population of N network chromosomes comprises the following steps:
packaging the convolution layer, the reflection filling layer and the batch normalization layer into a single block from front to back in sequence, copying and sequentially arranging the packaged single blocks, and connecting adjacent single blocks by adopting a ReLU activation function; adding a convolution layer behind the single block at the tail end to obtain a plurality of first network chromosomes with different lengths;
packaging the convolution layer and the batch normalization layer into a single block from front to back, copying and sequentially arranging the packaged single blocks, and connecting adjacent single blocks by adopting a ReLU activation function; adding a convolution layer behind the single block at the tail end to obtain a plurality of second network chromosomes with different lengths;
initializing the convolution layer and the batch normalization layer of the first network chromosome and the second network chromosome respectively; and obtaining the network chromosome population of N network chromosomes.
Further, the kernel size of a single block of the second network chromosome is 1; the kernel size of a single block of the first network chromosome is k, where k is a natural number greater than 1; and the parameter of the reflection filling layer within a single block of the first network chromosome is (k-1)/2.
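The relation between the kernel size k and the reflection filling parameter (k-1)/2 can be checked with a short sketch (a minimal illustration assuming a stride-1 convolution; the helper names are ours, not the patent's):

```python
def reflection_pad_for(kernel_size: int) -> int:
    """Padding that restores the pre-convolution spatial size (odd kernels)."""
    if kernel_size % 2 == 0:
        raise ValueError("kernel size must be odd")
    return (kernel_size - 1) // 2

def conv_output_size(size: int, kernel_size: int, padding: int) -> int:
    """Spatial size after a stride-1 convolution with symmetric padding."""
    return size + 2 * padding - kernel_size + 1

# A k x k convolution preceded by (k-1)/2 reflection padding keeps the size;
# k = 1 gives padding 0, i.e. the padding layer is effectively absent.
for k in (1, 3, 5, 7):
    p = reflection_pad_for(k)
    assert conv_output_size(30, k, p) == 30
```

This is why the second network chromosome (kernel size 1) needs no reflection filling layer at all.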
Further, initializing the convolution layer and the batch normalization layer includes:
presetting and coding the convolution kernel width, the convolution kernel height, the number of characteristic images and the mean value and the variance of convolution kernel elements of the convolution layer;
and presetting and coding the mean and variance of the neurons of the batch normalization layer.
Further, the convolution layers at the ends of the first network chromosome and the second network chromosome have the same structure, with a fixed kernel size and a fixed number of output feature images.
Further, selecting one parent network chromosome individual by the relaxed binary competition selection algorithm comprises the following steps:
step S51, randomly selecting two network chromosome individuals from the network chromosome population, and obtaining the absolute value of the difference value of the mean square errors of the two network chromosome individuals;
step S52, comparing the absolute value of the difference of the mean square errors of the two network chromosome individuals with the preset threshold value of the mean square error;
if the absolute value is larger than a preset threshold value of the mean square error, selecting a network chromosome with a smaller mean square error;
if the absolute value is smaller than a preset threshold value of mean square error, selecting network chromosome individuals with lower complexity;
if the absolute value is smaller than a preset threshold value of the mean square error and the complexity is the same, selecting a network chromosome individual with a smaller mean square error;
in step S53, the network chromosome individual selected in step S52 is used as a parent network chromosome individual.
Preferably, the aligning and crossing operations on the two parent network chromosome individuals to obtain two child network chromosomes comprise the following steps:
extracting the two parent network chromosome individuals, and aligning their blocks one by one from the left end;
and adopting interval cross substitution to obtain two offspring network chromosomes.
Furthermore, selecting any one of the offspring network chromosomes for mutation until an offspring population of N network chromosome individuals is obtained comprises the following steps:
selecting any offspring network chromosome, and applying polynomial mutation to its encoded information to mutate the real values;
and randomly increasing or decreasing the number of network layers to obtain the offspring population of N individuals.
Preferably, the method for constructing the evolutionary neural network further comprises randomly increasing or decreasing the number of network layers of the polynomially mutated offspring network chromosomes; an added layer is one of a convolution layer, a reflection filling layer and a batch normalization layer.
Preferably, the probability of each type of length-changing operation is 1/3.
Further, the method for constructing the evolutionary neural network further comprises moving the added network to the last layer of the offspring network chromosome.
Further, the (N-x) network chromosome individuals are selected from the network chromosome population of the N network chromosomes and the offspring population of the N network chromosome individuals by adopting an elite selection strategy, and the method comprises the following steps:
selecting x network chromosome individuals from the network chromosome population of the N network chromosomes and the offspring population of the N network chromosome individuals according to the mean square error;
and (N-x) network chromosome individuals are selected from the remaining (2N-x) network chromosome individuals by utilizing a relaxed binary competition algorithm.
Further, the method for constructing the evolutionary neural network further comprises selecting two network chromosome individuals from the selected (N-x) network chromosome individuals by applying the relaxed binary competition selection algorithm twice, and performing the alignment, crossover and mutation operations until an optimized offspring population of N individuals is obtained.
The hyperspectral image denoising method based on the evolutionary neural network comprises the following steps:
segmenting the hyperspectral image into m × m slices, and dividing the slices into a training set and a test set; m is a natural number greater than 1;
learning the evolutionary neural network by adopting a training set, optimizing by using an Adam algorithm, and repeating training for a plurality of times to obtain a convergent evolutionary neural network;
and inputting the images of the test set into the evolutionary neural network, and outputting the denoised images.
Further, the hyperspectral image denoising method based on the evolutionary neural network further comprises adding Gaussian noise with a noise intensity of 0.003 to the hyperspectral image.
Preferably, the ratio of the training set to the test set is 4:1.
Compared with the prior art, the invention has the following beneficial effects:
(1) the batch normalization layer is skillfully arranged, which accelerates the convergence of the neural network and alleviates the problems of gradient vanishing and gradient explosion in hyperspectral image denoising;
(2) the invention skillfully introduces the reflection filling layer, which preserves the denoising and restoration effect at the edges of the hyperspectral image while keeping the output size of the image; adding a reflection filling layer to the neural network instead of zero padding keeps the size of the output image, so the information within the image is better used to protect the denoising effect at the boundary;
(3) the invention skillfully adopts the relaxed binary competition selection algorithm to select excellent individuals: networks with excellent hyperspectral denoising effect and low complexity are selected, with the denoising effect taking priority;
(4) the invention provides an encoding for the neural network, so that the encoded network is well suited to the evolutionary algorithm and a network appropriate for hyperspectral image denoising can be evolved, and thus designed automatically;
(5) the invention provides genetic operations for generating offspring that apply well to the proposed variable-length encoding, which brings the advantage that a suitable network can be searched in a wider range.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of protection, and it is obvious for those skilled in the art that other related drawings can be obtained according to these drawings without inventive efforts.
FIG. 1 is a schematic diagram of the network chromosome encapsulation of the present invention.
FIG. 2 is a schematic diagram of the generation of offspring from the web chromosome of the present invention.
FIG. 3 is a schematic diagram illustrating denoising and comparing hyperspectral images.
FIG. 4 is a diagram of denoising training according to the present invention.
Detailed Description
To further clarify the objects, technical solutions and advantages of the present application, the present invention will be further described with reference to the accompanying drawings and examples, and embodiments of the present invention include, but are not limited to, the following examples. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Examples
As shown in FIG. 1 to FIG. 4, the present embodiment provides a hyperspectral image denoising method based on an evolutionary neural network. Since the evolutionary neural network of this embodiment is brand new, the method for constructing it is explained first, comprising the following steps:
the method comprises the steps that firstly, a convolution layer, a reflection filling layer and a batch normalization layer are packaged into a single block from front to back in sequence, a plurality of packaged single blocks are copied and sequentially arranged, and a ReLU activation function is adopted to connect adjacent single blocks; and a convolution layer is added after the single block at the tail end, so that a plurality of first network chromosomes with different lengths are obtained.
Secondly, the convolution layer and the batch normalization layer are packaged into a single block from front to back; the packaged single blocks are copied and sequentially arranged, and adjacent single blocks are connected with the ReLU activation function; a convolution layer is added after the single block at the end, giving a plurality of second network chromosomes with different lengths. As shown in FIG. 1, in this embodiment the three layers are packed into one block in the order convolution layer -> reflection filling layer -> batch normalization layer -> ReLU activation function. FIG. 1 shows two different kinds of blocks. In one, the kernel size of the convolution layer is 1 and the reflection filling parameter is set to 0, which is equivalent to having no such layer (second row). In the other, the kernel size k is greater than 1 (first row); here the reflection filling layer has parameter (k-1)/2, so the image is restored to its pre-convolution size immediately after the convolution layer. The lower two rows of FIG. 1 show neural networks under the two different combinations.
Thirdly, the convolution layer and the batch normalization layer of the first network chromosome and the second network chromosome are respectively initialized, yielding a network chromosome population of N network chromosomes whose encoded information is shown in Table I:
[Table I is reproduced in the original document only as an image and is not shown here.]
the population initialization steps are as follows:
firstly, randomly selecting the length of an individual chromosome, i.e. the number of blocks, within a given range;
secondly, randomly initializing, for each block, the values of the encoded information listed in the table;
and thirdly, adding at the very end a convolution layer with a fixed kernel size and a fixed number of output feature images, to keep the outputs of all networks consistent.
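The three initialization steps above can be sketched as follows (a minimal Python illustration; the block fields, kernel choices and value ranges are our own assumptions standing in for the encoded information of Table I):

```python
import random

def random_block(kind: str) -> dict:
    """Randomly initialise the encoded information of one block.

    kind 'first': convolution + reflection filling + batch norm (kernel k > 1);
    kind 'second': convolution + batch norm (kernel size 1, padding 0).
    """
    k = random.choice([3, 5, 7]) if kind == "first" else 1
    return {
        "kernel_w": k,
        "kernel_h": k,
        "pad": (k - 1) // 2,               # reflection-filling parameter
        "feature_maps": random.randint(8, 64),
        "weight_mean": random.uniform(-0.1, 0.1),
        "weight_var": random.uniform(0.0, 0.1),
        "bn_mean": random.uniform(-0.1, 0.1),
        "bn_var": random.uniform(0.0, 0.1),
    }

def random_chromosome(min_len: int = 2, max_len: int = 8) -> list:
    """A chromosome: a random-length list of blocks plus a fixed final conv."""
    length = random.randint(min_len, max_len)      # step 1: random length
    kind = random.choice(["first", "second"])
    blocks = [random_block(kind) for _ in range(length)]  # step 2: init blocks
    blocks.append({"kernel_w": 3, "kernel_h": 3, "pad": 1,
                   "feature_maps": 1, "final": True})     # step 3: fixed output layer
    return blocks

def init_population(n: int) -> list:
    return [random_chromosome() for _ in range(n)]
```

The fixed trailing convolution layer mirrors step 3, guaranteeing every individual produces output of the same shape.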
Fourthly, the mean square error of each network chromosome in the network chromosome population is calculated, and a threshold value of the mean square error is preset. The mean square error is computed between the noise-free hyperspectral image and the denoised output, i.e. it measures the difference between the image denoised by the network chromosome individual and the original image.
Fifthly, two parent network chromosome individuals are selected by applying the relaxed binary competition selection algorithm twice; a single application selects one parent network chromosome individual and comprises the following steps:
(1) randomly selecting two network chromosome individuals from the network chromosome population, and obtaining the absolute value of the difference value of the mean square errors of the two network chromosome individuals;
(2) comparing the absolute value of the difference of the mean square errors of the two network chromosome individuals with the preset threshold value of the mean square error;
if the absolute value is larger than a preset threshold value of the mean square error, selecting a network chromosome with a smaller mean square error;
if the absolute value is smaller than a preset threshold value of mean square error, selecting network chromosome individuals with lower complexity;
if the absolute value is smaller than a preset threshold value of the mean square error and the complexity is the same, selecting a network chromosome individual with a smaller mean square error;
(3) and (3) taking the network chromosome individual selected in the step (2) as a parent network chromosome individual.
In plain terms: two individuals are first selected at random from the population, and the relative difference of their mean square errors is computed, i.e. the difference divided by the larger of the two. Since a smaller mean square error is better, if this proportion exceeds a specified threshold, the individual with the smaller mean square error is selected. A plain difference is not used because in the early stage of the evolutionary algorithm the mean square errors of all individuals are large, while in the later stage most individuals already denoise well and their mean square errors are generally small; using a proportion therefore keeps the selection operation consistent across generations.
If the computed proportion does not exceed the specified threshold, the two individuals are considered to denoise almost equally well, and their complexity is compared instead. Complexity is obtained here by counting the hidden neurons of an individual. The difference in complexity of the two individuals is compared with a threshold to judge whether one individual is clearly simpler; if so, the simpler individual is selected; if not, the complexities are considered equal, and the individual with the smaller mean square error is selected.
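The selection just described is essentially a relaxed binary tournament. A minimal sketch follows; the 0.05 threshold, the dict-based bookkeeping and the simplified complexity tie-break are illustrative assumptions:

```python
import random

def relaxed_binary_tournament(population, mse, complexity, threshold=0.05):
    """Pick one parent index: compare MSE first, fall back to complexity.

    `mse` and `complexity` map an individual's index to its mean square
    error and hidden-neuron count.
    """
    i, j = random.sample(range(len(population)), 2)
    # Relative difference keeps the comparison consistent early and late
    # in the evolution, when absolute MSE magnitudes differ greatly.
    diff = abs(mse[i] - mse[j]) / max(mse[i], mse[j], 1e-12)
    if diff > threshold:                       # clear winner on MSE
        return i if mse[i] < mse[j] else j
    if complexity[i] != complexity[j]:         # MSE tie: prefer the simpler net
        return i if complexity[i] < complexity[j] else j
    return i if mse[i] < mse[j] else j         # equal complexity: MSE again
```

Applying this function twice yields the two parents required by the fifth step.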
Sixthly, aligning and crossing the two parent network chromosome individuals to obtain two offspring network chromosomes, and the method comprises the following steps:
(1) extracting the two parent network chromosome individuals and aligning their blocks one by one from the left end;
(2) adopting interval cross substitution to obtain two offspring network chromosomes.
The process is shown in FIG. 2. Two broad cases can occur in the crossover operation: 1. the two individuals have different lengths; 2. they have the same length. When the lengths are the same, the crossover simply applies simulated binary crossover (SBX) to the encoded information of every pair of corresponding blocks. The different-length case is more involved, as shown in FIG. 2, and is divided into two small steps: 1. align the blocks; 2. cross them.
Storing the crossed offspring individuals: in FIG. 2 there are first two individuals of lengths 4 and 5 (the final convolution layer of an individual is counted as a block). The two individuals are aligned at the front end, so the last block of the length-5 individual does not participate in the crossover, and its fourth batch normalization layer has no corresponding layer to cross with. After the front-aligned 4 blocks are crossed, the two newly generated offspring shown in FIG. 2(b) are obtained; the encoded information of their front-aligned blocks has all undergone simulated binary crossover. Note that during such a crossover, a block without a reflection filling layer may acquire one from the block that has this layer.
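The alignment-and-crossover step can be sketched as follows (a minimal illustration; the gene names and the SBX distribution index eta are our own assumptions, and the trailing blocks of the longer parent are copied through unchanged, mirroring FIG. 2):

```python
import random

def sbx(x, y, eta=15.0):
    """Simulated binary crossover (SBX) on a single real-valued gene."""
    u = random.random()
    if u <= 0.5:
        beta = (2 * u) ** (1 / (eta + 1))
    else:
        beta = (1 / (2 * (1 - u))) ** (1 / (eta + 1))
    c1 = 0.5 * ((1 + beta) * x + (1 - beta) * y)
    c2 = 0.5 * ((1 - beta) * x + (1 + beta) * y)
    return c1, c2

def align_and_cross(p1, p2,
                    keys=("weight_mean", "weight_var", "bn_mean", "bn_var")):
    """Front-align two variable-length chromosomes and cross paired blocks.

    Blocks beyond the shorter parent's length do not take part in the
    crossover and are inherited as-is.
    """
    n = min(len(p1), len(p2))
    c1 = [dict(block) for block in p1]
    c2 = [dict(block) for block in p2]
    for a, b in zip(c1[:n], c2[:n]):
        for key in keys:
            if key in a and key in b:
                a[key], b[key] = sbx(a[key], b[key])
    return c1, c2
```

A useful property of SBX is that the two children always preserve the parents' mean, so the crossover explores around the parents without drifting.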
Seventhly, any offspring network chromosome is selected for mutation, until an offspring population of N network chromosome individuals is obtained, comprising the following steps:
(1) selecting any offspring network chromosome and applying polynomial mutation to its encoded information to mutate the real values;
(2) randomly increasing or decreasing the number of network layers, until an offspring population of N individuals is obtained. The type of operation is chosen at random: adding a randomly initialized block, removing a random block other than the last layer, or keeping the length unchanged, each with probability 1/3.
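The two mutation operators above (polynomial mutation of the encoded real values, and the random length change with probability 1/3 per operation) can be sketched as follows; the distribution index eta and the value bounds are illustrative assumptions:

```python
import random

def polynomial_mutation(value, low, high, eta=20.0):
    """Polynomial mutation of one real-valued gene, clamped to [low, high]."""
    u = random.random()
    if u < 0.5:
        delta = (2 * u) ** (1 / (eta + 1)) - 1
    else:
        delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
    return min(high, max(low, value + delta * (high - low)))

def structural_mutation(chromosome, make_block):
    """With probability 1/3 each: add a randomly initialised block, remove
    a block (never the fixed last layer), or keep the length unchanged."""
    op = random.choice(["add", "remove", "keep"])
    child = list(chromosome)
    if op == "add":
        # insert before a random position; the final layer always stays last
        child.insert(random.randrange(len(child)), make_block())
    elif op == "remove" and len(child) > 2:
        # keep at least one block plus the final convolution layer
        child.pop(random.randrange(len(child) - 1))
    return child
```

Note the removal operator never touches the last position, matching the requirement that the fixed output convolution layer is excluded from structural changes.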
Eighthly, an elite selection strategy is adopted to select (N-x) network chromosome individuals from the network chromosome population of the N network chromosomes and the offspring population of the N network chromosome individuals; the crossover and mutation are repeated to obtain an optimized offspring population of N individuals, and the optimal network chromosome is selected as the evolutionary neural network. This comprises the following steps:
(1) selecting x network chromosome individuals from the network chromosome population of the N network chromosomes and the offspring population of the N network chromosome individuals according to the mean square error;
(2) and (N-x) network chromosome individuals are selected from the remaining (2N-x) network chromosome individuals by utilizing a relaxed binary competition algorithm.
(3) selecting two network chromosome individuals from the selected (N-x) network chromosome individuals by applying the relaxed binary competition selection algorithm twice, and performing the alignment, crossover and mutation operations until an optimized offspring population of N individuals is obtained.
After the next-generation population is selected, a new round of evolution starts, until the specified maximum number of generations is reached. The optimal individual of the last generation is then selected as the network most suitable for hyperspectral image denoising found by the evolutionary search, awaiting final training. This completes the construction of the evolutionary neural network.
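The elite selection of steps (1) and (2) can be sketched as follows (a minimal illustration in which individuals are dicts carrying an 'mse' entry and the tournament is passed in as a function; both representations are our own assumptions):

```python
def elite_select(parents, offspring, x, tournament):
    """Environmental selection for the next generation of size N.

    The x best individuals by MSE from the combined 2N individuals are
    kept directly (elitism); the remaining N - x slots are filled from
    the other 2N - x individuals with the relaxed binary tournament.
    """
    combined = sorted(parents + offspring, key=lambda ind: ind["mse"])
    n = len(parents)
    elites, rest = combined[:x], combined[x:]
    return elites + [tournament(rest) for _ in range(n - x)]
```

Because the elites are copied through unchanged, the best mean square error in the population can never get worse from one generation to the next.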
In the present embodiment, the created population is continuously updated in the process of evolution, and individuals with excellent denoising performance (i.e. small mean square error) are continuously selected to become the next-generation evolved population. The mean square error of the individuals in the population is continuously reduced, the denoising effect of the individuals in the population is continuously improved, and finally the individual with the best denoising effect can be selected from the finally evolved population for denoising.
The hyperspectral image denoising method based on the evolutionary neural network is briefly set forth below, and comprises the following steps:
first, a hyperspectral image, Indian Pines, was previously collected, which is a landscape of indiana, usa taken by an AVIRIS sensor. The Gaussian noise with the noise intensity of 0.003 is added to the hyperspectral image, and then the hyperspectral image is cut into small slices of 30 x 30 every 10 pixels on a large image to obtain 2 thousands of small hyperspectral images. The resulting small sections were divided into a training set and a test set, with the test set accounting for 20%. Then, 20% of the data in the training set is separated as the validation set. It should be noted that the validation set of data only serves as validation when selecting the evolved CNN, while the test set serves as test in the final comparison of experimental results.
As shown in fig. 4, the evolutionary neural network consists of many hidden neurons (the small circles in the figure). Given a noisy image as input, the image passes through each layer of the network; each intermediate layer is equivalent to extracting features of the image and removing noise according to those features, and the output image is finally obtained. Before the network can be used for denoising, however, the selected network must first be trained;
then, the evolutionary neural network is trained on the training set and optimized with the Adam algorithm so that the output image becomes closer to the original image; after a number of training iterations the network converges, completing the training of the neural network;
and finally, the images of the test set are fed into the evolutionary neural network, which outputs the denoised images.
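The Adam update used in training can be written out explicitly. The sketch below implements the standard Adam step in NumPy and applies it to a toy one-parameter least-squares problem; the toy model stands in for the network, and the hyperparameters are illustrative.

```python
import numpy as np

def adam_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first/second moment estimates."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Toy stand-in for network training: fit a single gain theta so that
# theta * x approximates the clean signal y under an MSE loss.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x
theta, state = 0.0, (0.0, 0.0, 0)
for _ in range(5000):
    grad = np.mean(2.0 * (theta * x - y) * x)  # gradient of MSE w.r.t. theta
    theta, state = adam_step(theta, grad, state, lr=0.01)
```

In practice the same update is applied per-parameter to all convolution and batch-normalization weights of the selected network.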
In order to verify the denoising effect of the evolutionary neural network on the hyperspectral image, the following comparative tests are carried out:
in this embodiment, for the hyperspectral image denoising problem, the PSNR of the denoised image is used as the most important index of denoising quality. Its expression is:
PSNR = 10 · log10(MAX^2 / MSE)
wherein MAX is the maximum possible pixel value of the image and MSE is the mean square error between the denoised image and the original image;
in a hyperspectral image, the computation formula of MSE is:
MSE = ||D - C||^2
wherein D represents the denoised image, C represents the original clean image, and the double vertical lines represent the matrix two-norm.
The MSE measures the error between the denoised image D and the original clean image C: the smaller the MSE, the larger the PSNR and the better the denoising effect. The value of the MSE can therefore be computed directly and used as the fitness in the evolutionary algorithm, a smaller MSE corresponding to a higher fitness. This makes it convenient for the evolutionary neural network to select excellent individuals, and hence to search effectively for a network suited to denoising the hyperspectral image.
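Under the definitions above, both metrics are straightforward to compute. A minimal NumPy sketch, assuming pixel values normalized to [0, 1] so that MAX = 1:

```python
import numpy as np

def mse(d, c):
    """Mean squared error between denoised image d and clean image c."""
    return np.mean((d - c) ** 2)

def psnr(d, c, max_val=1.0):
    """Peak signal-to-noise ratio in dB; larger is better."""
    return 10.0 * np.log10(max_val ** 2 / mse(d, c))
```

The evolutionary algorithm would use `mse` directly as the fitness to minimize, while `psnr` serves as the reported quality index.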
In this embodiment, relatively strong hyperspectral image denoising algorithms on the market were adopted for comparison, and a 16-layer hand-designed convolutional neural network, modeled on deep-learning architectures that perform well in hyperspectral denoising, was built as our comparison algorithm. We are particularly interested in comparing the performance of this artificially designed network with that of the network automatically selected by the present invention. In the artificially designed network, each layer consists of a convolution, batch normalization and a ReLU activation function; the convolution kernel size is 3, and each layer preserves the output size by zero-padding. If the number of channels of the input image is denoted n, the numbers of output feature maps of the layers are n, n, n, n, n x 2, n x 4, n x 2, n, n, n. A convolutional layer alone, without zero-padding, is used as the last layer to ensure that the output image size equals that of the automatically selected network.
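The channel pattern of this hand-designed comparison network can be sketched as follows. The ten-entry channel list follows the text; the full 16-layer layout and the batch-normalization parameters are not fully specified there, so this sketch counts convolution parameters only and is illustrative.

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of a k x k convolution."""
    return c_in * c_out * k * k + c_out

def artificial_cnn_channels(n):
    """Per-layer output channel counts given in the text for the
    hand-designed comparison network, as a function of input channels n."""
    return [n, n, n, n, 2 * n, 4 * n, 2 * n, n, n, n]

def total_conv_params(n):
    """Total convolution parameters when layers are chained in order."""
    total, c_in = 0, n
    for c_out in artificial_cnn_channels(n):
        total += conv_params(c_in, c_out)
        c_in = c_out
    return total
```

For example, `artificial_cnn_channels(2)` gives `[2, 2, 2, 2, 4, 8, 4, 2, 2, 2]`, showing how the width expands by 2x and 4x in the middle of the network before contracting back to n.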
In this embodiment, four indexes, MPSNR, MSSIM, MFSIM and MERGA, are adopted to judge the quality of the hyperspectral image denoising; larger values of MPSNR, MSSIM and MFSIM indicate an image closer to the original, while a smaller MERGA indicates a better denoising effect.
The following table compares the indexes of the different algorithms in the 187th band:
Method MPSNR MSSIM MFSIM MERGA
Noisy image 50.311 0.98997 0.96443 29.857
BM3D 62.881 0.99905 0.97201 7.567
LRTA 64.939 0.99937 0.99030 5.877
LRTV 64.852 0.99954 0.98072 5.929
TDL 67.248 0.99967 0.98883 4.483
BM4D 65.122 0.99951 0.98107 5.575
ITSReg 66.979 0.99956 0.98829 4.906
LRMR 65.115 0.99958 0.98599 5.333
Artificial-CNN 69.317 0.99974 0.99197 4.095
Evolve-CNN 70.051 0.99977 0.99213 3.976
The comparison shows that the denoising effect of the evolutionary neural network on hyperspectral images is clearly superior to that of BM3D, LRTA, LRTV, TDL, BM4D, ITSReg, LRMR and Artificial-CNN. Compared with the prior art, the invention therefore has outstanding substantive features and represents remarkable progress, and it has high practical and popularization value in the technical field of hyperspectral image denoising.
The above-mentioned embodiments are only preferred embodiments of the present invention and do not limit its scope of protection; all modifications made according to the principles of the present invention, without inventive effort, on the basis of the above embodiments shall fall within the scope of protection of the present invention.

Claims (16)

1. A method for constructing an evolutionary neural network is characterized by comprising the following steps:
packaging the convolution layer, the reflection filling layer and the batch normalization layer into blocks, and constructing and initializing a network chromosome population of N network chromosomes; N is a natural number greater than or equal to 5;
calculating and obtaining the mean square error of any network chromosome in the network chromosome population, and presetting the threshold value of the mean square error;
selecting two parent network chromosome individuals by applying a relaxed binary competition selection algorithm twice;
aligning and crossing the two parent network chromosome individuals to obtain two offspring network chromosomes;
selecting any one of the network chromosomes of the offspring for mutation until offspring populations of N network chromosome individuals are obtained;
adopting an elite selection strategy to select (N-x) network chromosome individuals from the network chromosome population of the N network chromosomes and the offspring population of the N network chromosome individuals, repeating the crossover and mutation operations to obtain an optimized offspring population of N individuals, and selecting the optimal network chromosome as the evolutionary neural network; wherein x is a natural number greater than or equal to 1 and less than N.
2. The method for constructing the evolutionary neural network of claim 1, wherein the network chromosome population of N network chromosomes is obtained by encapsulating the convolutional layer, the reflection filling layer and the batch normalization layer into a block, and constructing and initializing the block, and comprises the following steps:
packaging the convolution layer, the reflection filling layer and the batch normalization layer into a single block from front to back in sequence, copying and sequentially arranging the packaged single blocks, and connecting adjacent single blocks by adopting a ReLU activation function; adding a convolution layer behind the single block at the tail end to obtain a plurality of first network chromosomes with different lengths;
packaging the convolution layer and the batch normalization layer into a single block from front to back, copying and sequentially arranging the packaged single blocks, and connecting adjacent single blocks by adopting a ReLU activation function; adding a convolution layer behind the single block at the tail end to obtain a plurality of second network chromosomes with different lengths;
initializing the convolution layer and the batch normalization layer of the first network chromosome and the second network chromosome respectively; and obtaining the network chromosome population of N network chromosomes.
3. The method of claim 2, wherein the kernel size of the single block of the second network chromosome is 1; the kernel size of a single block of the first network chromosome is k, k being a natural number greater than 1; and the parameter of the reflection filling layer within a single block of the first network chromosome is (k-1)/2.
4. The method of claim 2, wherein initializing the convolutional layer and the batch normalization layer comprises:
presetting and coding the convolution kernel width, the convolution kernel height, the number of characteristic images and the mean value and the variance of convolution kernel elements of the convolution layer;
and presetting and coding the mean and variance of the neurons of the batch normalization layer.
5. The method of claim 2, wherein the convolution layers at the ends of the first network chromosome and the second network chromosome have the same structure, with a fixed kernel size and a fixed number of output feature images.
6. The method for constructing an evolutionary neural network as claimed in claim 1, wherein any one of the parent network chromosome individuals is selected by using a relaxed binary competition selection algorithm, comprising the following steps:
step S51, randomly selecting two network chromosome individuals from the network chromosome population, and obtaining the absolute value of the difference value of the mean square errors of the two network chromosome individuals;
step S52, comparing the absolute value of the difference of the mean square errors of the two network chromosome individuals with the preset mean square error threshold;
if the absolute value is larger than a preset threshold value of the mean square error, selecting a network chromosome with a smaller mean square error;
if the absolute value is smaller than a preset threshold value of mean square error, selecting network chromosome individuals with lower complexity;
if the absolute value is smaller than a preset threshold value of the mean square error and the complexity is the same, selecting a network chromosome individual with a smaller mean square error;
in step S53, the network chromosome individual selected in step S52 is used as a parent network chromosome individual.
7. The method for constructing an evolutionary neural network as claimed in claim 1, wherein the aligning and crossing operation is performed on the two parent network chromosome individuals to obtain two child network chromosomes, and comprises the following steps:
extracting the two parent network chromosome individuals and aligning their blocks one by one from the left;
and adopting interval cross substitution to obtain two offspring network chromosomes.
8. The method for constructing the evolutionary neural network as claimed in claim 4, wherein the step of selecting any one of the network chromosomes of the offspring for mutation until obtaining the offspring population of the N network chromosome individuals comprises the following steps:
selecting any offspring network chromosome and performing polynomial mutation on its encoded information to mutate the real values;
and randomly increasing or decreasing the number of network layers until an offspring population of N individuals is obtained.
9. The method for constructing an evolutionary neural network as claimed in claim 8, further comprising randomly increasing or decreasing the number of network layers of the polynomially mutated offspring network chromosomes; the added layer is one of a convolutional layer, a reflection padding layer and a batch normalization layer.
10. The method of claim 9, wherein each of the three added layer types is chosen with a ratio of 1/3.
11. The method of claim 9, further comprising moving the added layer to the last layer of the offspring network chromosome.
12. The method for constructing an evolutionary neural network as claimed in claim 1, wherein the step of selecting (N-x) network chromosome individuals from the network chromosome population of N network chromosomes and the offspring population of N network chromosome individuals by using an elite selection strategy comprises the following steps:
selecting x network chromosome individuals from the network chromosome population of the N network chromosomes and the offspring population of the N network chromosome individuals according to the mean square error;
and (N-x) network chromosome individuals are selected from the remaining (2N-x) network chromosome individuals by utilizing a relaxed binary competition algorithm.
13. The method of claim 1, further comprising selecting two network chromosome individuals from the selected (N-x) network chromosome individuals by applying the relaxed binary competition selection algorithm twice, and performing the alignment, crossover and mutation operations until an optimized offspring population of N individuals is obtained.
14. The hyperspectral image denoising method based on the evolutionary neural network is characterized by comprising the following steps of:
segmenting the hyperspectral image into m × m slices, and dividing the slices into a training set and a test set; m is a natural number greater than 1;
learning the evolutionary neural network by adopting a training set, optimizing by using an Adam algorithm, and repeating training for a plurality of times to obtain a convergent evolutionary neural network;
and inputting the images of the test set into the evolutionary neural network, and outputting the denoised images.
15. The method for denoising the hyperspectral image based on the evolutionary neural network of claim 14, further comprising adding gaussian noise with a noise intensity of 0.003 to the hyperspectral image.
16. The hyperspectral image denoising method based on the evolutionary neural network of claim 14, wherein the ratio of the training set to the test set is 4: 1.
CN202010051941.3A 2020-01-17 2020-01-17 Construction method of evolutionary neural network and hyperspectral image denoising method based on evolutionary neural network Withdrawn CN111292260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010051941.3A CN111292260A (en) 2020-01-17 2020-01-17 Construction method of evolutionary neural network and hyperspectral image denoising method based on evolutionary neural network

Publications (1)

Publication Number Publication Date
CN111292260A true CN111292260A (en) 2020-06-16

Family

ID=71022287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010051941.3A Withdrawn CN111292260A (en) 2020-01-17 2020-01-17 Construction method of evolutionary neural network and hyperspectral image denoising method based on evolutionary neural network

Country Status (1)

Country Link
CN (1) CN111292260A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113687654A (en) * 2021-08-24 2021-11-23 迪比(重庆)智能科技研究院有限公司 Neural network training method and path planning method based on evolutionary algorithm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112263A (en) * 2014-06-28 2014-10-22 南京理工大学 Method for fusing full-color image and multispectral image based on deep neural network
CN106408522A (en) * 2016-06-27 2017-02-15 深圳市未来媒体技术研究院 Image de-noising method based on convolution pair neural network
US20190035118A1 (en) * 2017-07-28 2019-01-31 Shenzhen United Imaging Healthcare Co., Ltd. System and method for image conversion
CN110135498A (en) * 2019-05-17 2019-08-16 电子科技大学 Image identification method based on deep evolution neural network
CN110599409A (en) * 2019-08-01 2019-12-20 西安理工大学 Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200616