CN110163141B - Satellite image preprocessing method based on genetic algorithm - Google Patents

Satellite image preprocessing method based on genetic algorithm

Info

Publication number
CN110163141B
CN110163141B (application CN201910407112.1A)
Authority
CN
China
Prior art keywords
chromosome
data set
rgb
image
population
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910407112.1A
Other languages
Chinese (zh)
Other versions
CN110163141A (en)
Inventor
焦李成
孙龙
李英萍
***
丁静怡
郭雨薇
杨淑媛
侯彪
尚荣华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201910407112.1A priority Critical patent/CN110163141B/en
Publication of CN110163141A publication Critical patent/CN110163141A/en
Application granted granted Critical
Publication of CN110163141B publication Critical patent/CN110163141B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Genetics & Genomics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a satellite image preprocessing method based on a genetic algorithm, which mainly solves the problem that class balance cannot be achieved in the prior art. The scheme is as follows: 1) perform label denoising and data enhancement preprocessing on the satellite images; 2) balance the ground-object classes of the preprocessed satellite images; 3) fuse the class-balanced satellite images to generate training samples; 4) train a semantic segmentation model with the training samples; 5) detect shadow positions in the satellite image test sample set; 6) fuse the satellite image test set images and run the semantic segmentation model on them; 7) correct the detected semantic segmentation result using the pixel values at the shadow positions. The method solves the class balance problem of satellite images and guides the semantic segmentation result with the pixel values at the shadow positions, so that the semantic segmentation accuracy of satellite images is significantly improved. It can be used for data preprocessing of classification and segmentation tasks in deep learning.

Description

Satellite image preprocessing method based on genetic algorithm
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a satellite image preprocessing method based on a genetic algorithm, which can be used for preprocessing tasks of satellite image classification and segmentation in deep learning.
Background
With the growth of satellite image data, deep learning is applied more and more widely in the field of satellite image processing, and its performance is strongly influenced by satellite image preprocessing.
Satellite image preprocessing generally includes class balancing, data enhancement, and similar steps. The commonly used satellite image class balancing methods are: 1) down-sampling classes with many samples, randomly discarding part of the satellite pictures so that the difference in class counts is balanced as far as possible; 2) up-sampling satellite pictures of classes with few samples by flipping, rotating, zooming, cropping, translating, adding Gaussian noise, and similar operations, and adding the results to the data set to balance the difference in class counts as far as possible; 3) modifying the training weights in the model, assigning a larger weight to data of classes with fewer samples and a smaller weight to data of classes with more samples, so that the trained model reaches a higher overall accuracy.
However, these conventional satellite image data balancing methods cannot balance the ground-object classes in satellite images well. When the distribution of ground-object classes in a satellite picture no longer follows a specific rule, simple data enhancement methods such as flipping, zooming, cropping, and translating increase or decrease the number of ground objects of one class while also changing the number of ground objects of another class, so the desired class-balancing effect cannot be achieved.
The patent with publication number CN102495901B proposes a method that achieves class data balance through local mean preservation and can balance the ground-object classes in satellite images with a k-means algorithm. However, when this method is applied to satellite images of large scenes, two ground objects that are far apart cannot be balanced well, so a good balancing effect cannot be achieved.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a satellite image preprocessing method based on a genetic algorithm, so as to improve the balance between two ground objects that are far apart in a large-scene satellite image, and to perform channel fusion on images containing different channels to obtain images containing more information.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
(1) Reading a training data set in a satellite image, wherein the training data set comprises an RGB training data set and an eight-waveband multispectral MSI training data set, and carrying out the same label denoising and data enhancement preliminary processing on the images in the two data sets;
(2) Carrying out ground object class balance on the RGB training data set after the preliminary processing based on a genetic algorithm to obtain an RGB data set after class distribution balance, and selecting images with the same name as the balanced RGB data set from the eight-waveband multispectral MSI training data set to form a balanced eight-waveband multispectral MSI data set;
(3) Performing channel fusion on the RGB images in the balanced RGB data set and the corresponding eight-waveband multispectral MSI images in the balanced eight-waveband multispectral MSI data set to generate a training sample of a new channel;
(4) Sending the training sample into an existing image cascade network ICNet for training to obtain a trained semantic segmentation model;
(5) Reading a test data set in a satellite image, wherein the test data set comprises an RGB test data set and an eight-waveband multispectral MSI test data set, carrying out shadow position detection on the eight-waveband multispectral MSI test data set, and taking out the position of a shadow part;
(6) Fusing the corresponding images in the RGB test data set and the MSI test data set according to the method in the step (3) to generate a new test sample, and sending the new test sample into the semantic segmentation model obtained in the step (4) to obtain a semantic segmentation result;
(7) Correcting the semantic segmentation result obtained in step (6) by using the pixel values at the shadow positions obtained in step (5) to obtain an optimized semantic segmentation result of the satellite image, thus finishing the preprocessing of the satellite image.
In summary, the advantages of the present invention are as follows:
Firstly, the invention adopts a satellite image preprocessing method based on a genetic algorithm: it denoises the labels of the training samples and enhances the data, and it uses the genetic algorithm to balance the ground-object classes accurately and quickly, obtaining a satellite image data set with balanced classes and strong generalization ability.
Secondly, the invention exploits the characteristics of the satellite images, which include RGB images and eight-waveband multispectral MSI images, and performs channel fusion on these two different forms of satellite image to obtain a new image with richer information, which helps to improve the accuracy of semantic segmentation.
Thirdly, the invention uses the characteristics of the eight-waveband multispectral MSI image to extract 3 channels (the near-infrared band, the red-edge band, and the yellow band), normalizes each of them to form a visualized image, extracts the shadow positions from the pixel values of the visualized image, and corrects the semantic segmentation result with the pixel values at the shadow positions, improving the semantic segmentation accuracy.
Description of the drawings:
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a sub-flow diagram of balancing data classes using a genetic algorithm in the present invention;
FIG. 3 is a flow chart of the shadow position extraction sub-process in the present invention;
FIG. 4 is a sub-flow diagram of semantic segmentation result correction using shadow location pixels in accordance with the present invention.
Detailed description of the preferred embodiment
The invention is described in detail below with reference to the attached drawing figures:
referring to fig. 1, the implementation steps of the invention are as follows:
step 1: and (5) performing primary processing on the satellite image.
1.1 Read a training dataset in the satellite image, including an RGB training dataset and an eight-band multispectral MSI training dataset;
1.2 Deleting images containing a large number of unlabeled labels in the RGB training data set to obtain a denoised RGB training data set, and deleting images with the same name in the eight-waveband multispectral MSI training data set to obtain a denoised eight-waveband multispectral MSI training data set;
1.3) Count the number of pixels C[m] of each class in the denoised RGB training data set, and calculate the sum C of the pixel counts of all classes in the RGB training data set:
C = C[1] + C[2] + ... + C[M],
where m is the class label, m ∈ [1, M], and M is the total number of classes of the satellite image;
1.4) Calculate the ratio of each class pixel count C[m] to the sum C of all class pixel counts in the RGB training data set:
C[m] / C;
1.5) For the images of classes whose ratio C[m]/C falls below a set threshold, first perform a horizontal or vertical flip transformation and then a brightness or saturation transformation, obtaining the preliminarily processed RGB training data set;
1.6) Perform the operation of 1.5) on the denoised eight-waveband multispectral MSI training data set to obtain the preliminarily processed eight-waveband multispectral MSI training data set.
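As an illustration of 1.3)-1.5), the sketch below computes the per-class pixel counts C[m] and the ratios C[m]/C from label maps stored as 2-D integer class-index arrays with values in [0, M-1]; the 1/M cut-off used to flag under-represented classes is an assumption, since the exact condition appears only as an equation image in the original:

import numpy as np

def class_pixel_ratios(label_maps, num_classes):
    """Count per-class pixel totals C[m] over a list of 2-D label maps
    and return both the counts and the ratios C[m] / C."""
    counts = np.zeros(num_classes, dtype=np.int64)
    for lab in label_maps:
        counts += np.bincount(lab.ravel(), minlength=num_classes)
    return counts, counts / counts.sum()

def classes_to_augment(ratios):
    """Flag under-represented classes; the 1/M cut-off is an assumption,
    since the patent states the condition only as an equation image."""
    return np.flatnonzero(ratios < 1.0 / len(ratios))

Images containing the flagged classes would then be flipped and brightness/saturation transformed as described in 1.5).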
Step 2: Balance the ground-object classes of the satellite images using a genetic algorithm.
Referring to fig. 2, the specific implementation of this step is as follows:
2.1) Randomly initialize a chromosome:
number the images in the preliminarily processed RGB training data set from 1 to n in order, where the number represents the position of the image in each chromosome; at the i-th position of the chromosome, randomly generate '0' or '1', with i ∈ [1, n], where n is the number of images in the preliminarily processed RGB training data set;
2.2) Perform 2.1) f times in a loop to generate a population of f chromosomes, where f is set to 10-50;
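A chromosome of 2.1)-2.2) can be represented as a binary vector of length n; the helper below is a minimal sketch under that assumption (names are illustrative):

import numpy as np

def init_population(n_images, pop_size, rng=None):
    """Generate pop_size random binary chromosomes of length n_images;
    position i is '1' if image i is selected for the balanced data set."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.integers(0, 2, size=(pop_size, n_images), dtype=np.int8)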
2.3) Randomly select two chromosomes from the new-generation population for crossover and correction to obtain the crossed population:
2.3.1) Randomly generate p_c ∈ [0,1], where p_c is the crossover probability; if p_c > 0.6, randomly select a segment of one chromosome between two positions as the exchange part and exchange it with the segment at the same positions of the other chromosome, obtaining two crossed chromosomes;
2.3.2) Select one of the crossed chromosomes and count the number a1 of characters '1' in it; record the number of images in the balanced RGB data set as a, and, according to the difference between a1 and a, correct the values on this chromosome and replace the original chromosome in the population:
if a1 - a = 0, this chromosome is retained;
if a1 - a > 0, randomly select a1 - a positions where the character is '1' and modify them to '0';
if a1 - a < 0, randomly select a - a1 positions where the character is '0' and modify them to '1';
2.3.3) Perform step 2.3.2) on the other crossed chromosome as well;
2.3.4) Replace the two chromosomes before crossover with the two corrected chromosomes to obtain the crossed population.
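A minimal sketch of the crossover and count-repair logic of 2.3), assuming the target number a of selected images is given; function names are illustrative:

import numpy as np

def repair(chrom, a, rng):
    """Force the chromosome to contain exactly a ones (the correction of 2.3.2))."""
    ones = np.flatnonzero(chrom == 1)
    zeros = np.flatnonzero(chrom == 0)
    diff = len(ones) - a
    if diff > 0:                    # too many ones: flip diff of them back to 0
        chrom[rng.choice(ones, diff, replace=False)] = 0
    elif diff < 0:                  # too few ones: flip -diff zeros to 1
        chrom[rng.choice(zeros, -diff, replace=False)] = 1
    return chrom

def crossover(c1, c2, a, p_threshold=0.6, rng=None):
    """Swap one random segment between two chromosomes, then repair both."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() > p_threshold:  # crossover test of 2.3.1)
        lo, hi = sorted(rng.choice(len(c1), 2, replace=False))
        c1, c2 = c1.copy(), c2.copy()
        c1[lo:hi], c2[lo:hi] = c2[lo:hi].copy(), c1[lo:hi].copy()
        c1, c2 = repair(c1, a, rng), repair(c2, a, rng)
    return c1, c2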
2.4) Randomly select one chromosome from the crossed population for mutation and correction to obtain the mutated population:
2.4.1) Randomly generate p_m ∈ [0,1], where p_m is the mutation probability; if p_m > 0.6, randomly select any position on the chromosome; if the value at that position is '0' change it to '1', and if it is '1' change it to '0', obtaining the mutated chromosome;
2.4.2) Count the number a3 of characters '1' in the mutated chromosome, and record the number of images in the balanced RGB data set as a;
2.4.3) Correct the mutated chromosome according to the difference between a3 and a:
if a3 - a = 1, randomly select 1 position where the character is '1' and modify it to '0';
if a3 - a = -1, randomly select 1 position where the character is '0' and modify it to '1';
2.4.4) Replace the chromosome before mutation with the corrected chromosome to obtain the mutated population;
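A companion sketch of the mutation of 2.4), reusing the repair helper from the crossover sketch above (illustrative, not the patent's reference implementation):

import numpy as np

def mutate(chrom, a, p_threshold=0.6, rng=None):
    """Flip one random bit under the mutation test of 2.4.1), then restore exactly a ones."""
    rng = np.random.default_rng() if rng is None else rng
    chrom = chrom.copy()
    if rng.random() > p_threshold:
        pos = rng.integers(len(chrom))
        chrom[pos] ^= 1                   # '0' -> '1' or '1' -> '0'
        chrom = repair(chrom, a, rng)     # correction of 2.4.2)-2.4.4)
    return chrom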
2.5) Select f individuals from the mutated population by roulette and keep them as offspring chromosomes:
2.5.1) Count the pixel number c'_j[m] of each ground-object class in each chromosome, and calculate the mean of all classes contained in the chromosome:
mean[j] = (c'_j[1] + c'_j[2] + ... + c'_j[M]) / M,
where mean[j] is the mean value of all classes in the j-th chromosome, m is the class label, and M is the total number of classes of the satellite image;
2.5.2) Calculate the variance value var[j] of all classes in each chromosome:
var[j] = ((c'_j[1] - mean[j])^2 + (c'_j[2] - mean[j])^2 + ... + (c'_j[M] - mean[j])^2) / M;
2.5.3) Map the variance to the fitness used in the genetic algorithm:
fitness[j] = max(var) - var[j],
where var[j] is the variance value of all classes in the j-th chromosome, fitness[j] represents the fitness of the j-th chromosome, max(var) represents the maximum chromosome variance in each generation, and f is the population size;
2.5.4) Calculate the cumulative probability qs[s] of each chromosome required by the roulette selection strategy:
qs[s] = p[1] + p[2] + ... + p[s],
where p[s] is the probability of each chromosome being selected, expressed as follows:
p[s] = fitness[s] / (fitness[1] + fitness[2] + ... + fitness[f]);
2.5.5) Randomly generate a number k ∈ [0,1]; if qs[s-1] < k ≤ qs[s], the s-th chromosome is selected and kept;
2.5.6) Repeat 2.5.5) f times to select f chromosomes that form the new-generation population;
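A sketch of the variance-based roulette selection of 2.5); it assumes the per-class pixel counts c'_j[m] of each chromosome are available as an (f × M) array, and it uses fitness[j] = max(var) - var[j] as the variance-to-fitness mapping, which is an assumption because the exact formula appears only as an equation image:

import numpy as np

def roulette_select(class_counts, pop, rng=None):
    """class_counts: (f, M) array of per-class pixel counts c'_j[m] for each chromosome.
    pop: (f, n) population array. Returns the f selected chromosomes and their variances."""
    rng = np.random.default_rng() if rng is None else rng
    var = class_counts.var(axis=1)           # var[j] over the M classes, as in 2.5.2)
    fitness = var.max() - var                # assumed mapping: lower variance, higher fitness
    total = fitness.sum()
    p = fitness / total if total > 0 else np.full(len(var), 1.0 / len(var))
    qs = np.cumsum(p)                        # cumulative probabilities qs[s], as in 2.5.4)
    picks = np.searchsorted(qs, rng.random(len(pop)))   # index s with qs[s-1] < k <= qs[s]
    picks = np.minimum(picks, len(pop) - 1)  # guard against floating-point round-off
    return pop[picks].copy(), var[picks]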
2.6) Record the minimum-variance chromosome h' in the new-generation population and update the optimal chromosome h:
h(n) = h' if var[h'] < var[h(n-1)]; otherwise h(n) = h(n-1),
where h' represents the minimum-variance chromosome retained after roulette in the current generation, h represents the optimal chromosome, n represents the iteration number, var[h] represents the variance value of all classes in chromosome h, and var[h'] represents the variance value of all classes in chromosome h';
2.7) Repeat steps 2.3)-2.6) until the number of iterations reaches 10000, then stop;
2.8) Output the optimal chromosome, and add the images corresponding to the positions of the character '1' in this chromosome to the balanced RGB data set.
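Tying steps 2.3)-2.8) together, a compact driver loop could look like the following; crossover, mutate, and roulette_select refer to the sketches above, and compute_class_counts (mapping a chromosome to its per-class pixel counts) is an assumed helper:

import numpy as np

def balance_with_ga(pop, a, compute_class_counts, n_iter=10000, rng=None):
    """Evolve the population and return the best (minimum-variance) chromosome found."""
    rng = np.random.default_rng() if rng is None else rng
    best, best_var = None, np.inf
    for _ in range(n_iter):
        i, j = rng.choice(len(pop), 2, replace=False)
        pop[i], pop[j] = crossover(pop[i], pop[j], a, rng=rng)      # 2.3)
        k = rng.integers(len(pop))
        pop[k] = mutate(pop[k], a, rng=rng)                          # 2.4)
        counts = np.stack([compute_class_counts(c) for c in pop])
        pop, var = roulette_select(counts, pop, rng=rng)             # 2.5)
        g = var.argmin()                                             # minimum-variance chromosome h' of 2.6)
        if var[g] < best_var:
            best, best_var = pop[g].copy(), var[g]
    return best   # the images at its '1' positions form the balanced RGB data set, as in 2.8)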
Step 3: Fuse the two image channels to generate training samples with a new channel.
Because the near-infrared band (V-NIR) of the eight-waveband multispectral image (MSI) enhances the display of vegetation, the RGB image and the eight-waveband multispectral MSI image are channel-fused to generate training samples with a new channel, which contain more information.
This step is implemented as follows:
3.1) Weight-fuse the G channel of the RGB image and the near-infrared band (V-NIR) of the eight-waveband multispectral image MSI to obtain a new channel CH_new:
CH_new = CH(G) × P + CH(V-NIR) × (1 - P),
where P is the weight, taken as 0.8;
3.2) Replace the G channel of the original RGB image with CH_new and keep the R channel and the B channel to generate a new image.
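A minimal sketch of the fusion in 3.1)-3.2), assuming the RGB image is an (H, W, 3) array with the G channel at index 1 and the near-infrared band is a same-sized 2-D array:

import numpy as np

def fuse_g_nir(rgb, nir, p=0.8):
    """Replace the G channel of an (H, W, 3) RGB array with
    CH_new = CH(G) * p + CH(V-NIR) * (1 - p); R and B are kept unchanged."""
    fused = rgb.astype(np.float32)
    fused[..., 1] = rgb[..., 1].astype(np.float32) * p + nir.astype(np.float32) * (1.0 - p)
    return fused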
Step 4: Train the image cascade network ICNet using the training samples.
To verify whether this satellite image preprocessing improves semantic segmentation accuracy, the training samples are sent into the existing image cascade network ICNet for training to obtain a trained semantic segmentation model, implemented as follows:
4.1) Randomly initialize the internal weights of the image cascade network ICNet;
4.2) Send the training samples into the image cascade network ICNet; after all training samples have passed through the network, the image cascade network ICNet automatically modifies its internal weights according to the loss value output at the end of the pass;
4.3) Repeat 4.2) until the loss values output by the image cascade network ICNet in 20 consecutive passes fluctuate within ±0.5, and save the semantic segmentation model with the final weights.
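The stopping rule of 4.3) can be checked with a small helper; the sketch below assumes one loss value is recorded per full pass over the training set and interprets "±0.5" as the last 20 losses staying within a band of width 2 × 0.5:

def should_stop(losses, window=20, tolerance=0.5):
    """Return True when the last `window` loss values all lie within
    +/- tolerance of their midpoint, i.e. their spread is at most 2 * tolerance."""
    if len(losses) < window:
        return False
    recent = losses[-window:]
    return max(recent) - min(recent) <= 2 * tolerance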
Step 5: Detect and record the shadow positions.
Because shadow regions severely affect the semantic segmentation result, shadow position detection is performed on the eight-waveband multispectral MSI images in the satellite image test set, and the result is used to guide the semantic segmentation of the test images.
Referring to fig. 3, this step is implemented as follows:
5.1) Extract the 3 channels of the near-infrared band, the red-edge band, and the yellow band from the eight-waveband multispectral MSI image;
5.2) Normalize the value range of the near-infrared band to [0,255] and round down, setting it as the first channel of a new image;
5.3) Normalize the value range of the red-edge band to [0,255] and round down, setting it as the second channel of the new image;
5.4) Normalize the value range of the yellow band to [0,255] and round down, setting it as the third channel of the new image;
5.5) Find and record the positions whose pixel values in the new image generated from the three channels lie in the range (0,0,0) to (10,21,16); these positions are the shadow positions.
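A sketch of 5.1)-5.5), assuming band-wise min-max normalization is intended and the three bands are 2-D float arrays on the same grid:

import numpy as np

def normalize_band(band):
    """Scale a band to [0, 255] by min-max normalization and round down."""
    band = band.astype(np.float64)
    scaled = (band - band.min()) / (band.max() - band.min() + 1e-12) * 255.0
    return np.floor(scaled).astype(np.uint8)

def shadow_positions(nir, red_edge, yellow, low=(0, 0, 0), high=(10, 21, 16)):
    """Stack the three normalized bands into a new image and return the
    (row, col) coordinates whose values fall in the (0,0,0)-(10,21,16) range."""
    img = np.dstack([normalize_band(nir), normalize_band(red_edge), normalize_band(yellow)])
    mask = np.all((img >= np.array(low)) & (img <= np.array(high)), axis=-1)
    return np.argwhere(mask)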
Step 6: Test the test samples using the semantic segmentation model obtained in step 4.
Fuse the RGB images in the satellite image test set with the corresponding eight-waveband multispectral MSI images according to the method in step 3 to generate new test samples, and send them into the semantic segmentation model obtained in step 4 to obtain the semantic segmentation result.
Step 7: Correct the semantic segmentation result using the pixel values at the shadow positions.
Correct the semantic segmentation result obtained in step 6 with the pixel values at the shadow positions obtained in step 5 to obtain the optimized semantic segmentation result of the satellite image, finishing the preprocessing of the satellite image.
Referring to fig. 4, this step is implemented as follows:
7.1) Extract the pixel values at the shadow positions obtained in step 5;
7.2) Correct the semantic segmentation result obtained in step 6 according to the pixel values at the shadow positions:
if the pixel value at a shadow position is (0,255,64-100), correct the corresponding position of the semantic segmentation result to building;
if the pixel value at a shadow position is (0,255,127), correct the corresponding position of the semantic segmentation result to ground;
if the pixel value at a shadow position is (0,0,127), correct the corresponding position of the semantic segmentation result to high vegetation.
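A sketch of the correction rule of 7.2); the class ids used for building, ground, and high vegetation are placeholders, and shadow_pixels is assumed to hold the (R, G, B) value read at each shadow coordinate:

def correct_segmentation(seg, shadow_coords, shadow_pixels,
                         building=1, ground=2, high_vegetation=3):
    """Overwrite the predicted class at each shadow position according to 7.2).
    seg: 2-D array of class ids; shadow_pixels: (R, G, B) value at each coordinate.
    The class ids are illustrative placeholders, not values fixed by the patent."""
    seg = seg.copy()
    for (r, c), (pr, pg, pb) in zip(shadow_coords, shadow_pixels):
        if pr == 0 and pg == 255 and 64 <= pb <= 100:
            seg[r, c] = building
        elif pr == 0 and pg == 255 and pb == 127:
            seg[r, c] = ground
        elif pr == 0 and pg == 0 and pb == 127:
            seg[r, c] = high_vegetation
    return seg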
The above is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A satellite image preprocessing method based on a genetic algorithm, characterized by comprising the following steps:
(1) Reading a training data set in a satellite image, wherein the training data set comprises an RGB training data set and an eight-waveband multispectral MSI training data set, and carrying out the same label denoising and data enhancement preliminary processing on the images in the two data sets;
(2) Carrying out ground object class balance on the RGB training data set after the preliminary processing based on a genetic algorithm to obtain an RGB data set after class distribution balance, and selecting images with the same name as the balanced RGB data set from the eight-waveband multispectral MSI training data set to form a balanced eight-waveband multispectral MSI data set;
(3) Performing channel fusion on the RGB images in the balanced RGB dataset and the corresponding eight-waveband multispectral MSI images in the balanced eight-waveband multispectral MSI dataset to generate a training sample of a new channel; the method is realized as follows:
(3a) Weight-fusing the G channel of the RGB image and the near-infrared (V-NIR) band of the eight-waveband multispectral image MSI to obtain a new channel CH_new:
CH_new = CH(G) × P + CH(V-NIR) × (1 - P)
where P is the weight, taken as 0.8;
(3b) Replacing the G channel of the original RGB image with CH_new and keeping the R channel and the B channel to generate a new image;
(4) Sending the training sample into the existing image cascade network ICNet for training to obtain a trained semantic segmentation model;
(5) Reading a test data set in a satellite image, wherein the test data set comprises an RGB test data set and an eight-waveband multispectral MSI test data set, detecting the shadow positions in the eight-waveband multispectral MSI test data set, and taking out the positions of the shadow parts; the method is realized as follows:
(5a) Extracting 3 channels of a near infrared band, a red edge band and a yellow band in the eight-band multispectral MSI image;
(5b) Normalizing the numerical range of the near-infrared band to [0,255], and rounding down to be a first channel of a new image;
(5c) Normalizing the numerical range of the red edge wave band to [0,255], and rounding down to set the numerical range as a second channel of a new image;
(5d) Normalizing the numerical range of the yellow waveband to [0,255], and rounding down to be set as a third channel of a new image;
(5e) Finding and recording the positions whose pixel values in the new image generated from the three channels lie in the range from (0,0,0) to (10,21,16); these positions are the shadow positions;
(6) Fusing the corresponding images in the RGB test data set and the MSI test data set according to the method in the step (3) to generate a new test sample, and sending the new test sample into the semantic segmentation model obtained in the step (4) to obtain a semantic segmentation result;
(7) Correcting the semantic segmentation result obtained in the step (6) by using the pixel value of the shadow position obtained in the step (5) to obtain a semantic segmentation result optimized by the satellite image, and finishing the preprocessing of the satellite image; the method is realized as follows:
(7a) Extracting the pixel value of the shadow position obtained in the step (5);
(7b) Correcting the semantic segmentation result obtained in (6) according to the pixel values at the shadow positions:
if the pixel value of the shadow position is (0,255,64-100), correcting the corresponding position of the semantic segmentation result into a building;
if the pixel value of the shadow position is (0,255,127), correcting the corresponding position of the semantic segmentation result to be the ground;
if the pixel value of the shadow position is (0,0,127), the corresponding position of the semantic segmentation result is corrected to be a high vegetation.
2. The method of claim 1 in which the images of the RGB training dataset and the eight-band multispectral MSI training dataset are initially processed in (1) as follows:
(1a) Deleting the images whose labels contain a large number of unlabeled pixels from the RGB training data set and the eight-waveband multispectral MSI training data set, thereby denoising the data;
(1b) Counting the pixel number C[m] of each class in the RGB training data set, and calculating the sum C of the pixel counts of all classes in the RGB training data set:
C = C[1] + C[2] + ... + C[M],
where m is the class label, m ∈ [1, M], and M is the total number of classes of the satellite image;
(1c) Calculating the ratio of each class pixel count C[m] in the RGB training data set to the sum C of all class pixel counts:
C[m] / C;
(1d) For the images of classes whose ratio C[m]/C falls below a set threshold, first performing a horizontal or vertical flip transformation and then a brightness or saturation transformation, so as to enhance the data;
(1e) Performing the operation of (1d) on the denoised eight-waveband multispectral MSI training data set in the same way to obtain a preliminarily processed eight-waveband multispectral MSI training data set.
3. The method according to claim 1, wherein the preliminary processed RGB training data set is balanced in feature class based on genetic algorithm in (2) as follows:
(2a) Randomly initializing one chromosome:
numbering the images in the preliminarily processed RGB training data set from 1 to n in order, where the number represents the position of the image in each chromosome; at the i-th position of the chromosome, '0' or '1' is randomly generated, with i ∈ [1, n], where n is the number of images in the preliminarily processed RGB training data set;
(2b) Executing (2a) f times in a loop to generate a population of f chromosomes, where f is set to 10-50;
(2c) Randomly selecting two chromosomes from the new generation of population for crossing and correcting to obtain a crossed population;
(2d) Randomly selecting a chromosome from the crossed population for mutation and correction to obtain a mutated population;
(2e) Selecting f individuals from the varied population in a roulette mode and reserving the f individuals as a new generation population;
(2f) Recording the chromosome h' with the minimum variance in the new-generation population, and updating the optimal chromosome h:
h(n) = h' if var[h'] < var[h(n-1)]; otherwise h(n) = h(n-1),
where h' represents the minimum-variance chromosome retained after roulette in the current generation, h represents the optimal chromosome, n represents the iteration number, var[h] represents the variance value of all classes in chromosome h, and var[h'] represents the variance value of all classes in chromosome h';
(2g) Repeating steps (2c)-(2f) until the number of iterations reaches 10000, then stopping;
(2h) Outputting the optimal chromosome, and adding the images corresponding to the positions of the character '1' in this chromosome to the balanced RGB data set.
4. The method of claim 3, wherein two chromosomes are randomly selected from the new generation population in (2 c) to be crossed and modified as follows:
(2c1) Randomly generating p_c ∈ [0,1], where p_c is the crossover probability; if p_c > 0.6, randomly selecting a segment of one chromosome between two positions as the exchange part and exchanging it with the segment at the same positions of the other chromosome, thereby obtaining two crossed chromosomes;
(2c2) Selecting one of the crossed chromosomes and counting the number a1 of characters '1' in it, recording the number of images in the balanced RGB data set as a, and, according to the difference between a1 and a, correcting the values on this chromosome and replacing the original chromosome in the population:
if a1 - a = 0, this chromosome is retained;
if a1 - a > 0, a1 - a positions where the character is '1' are randomly selected and modified to '0';
if a1 - a < 0, a - a1 positions where the character is '0' are randomly selected and modified to '1';
(2c3) Performing step (2c2) on the other crossed chromosome;
(2c4) Replacing the two chromosomes before crossover with the two corrected chromosomes to obtain the crossed population.
5. The method of claim 3, wherein in (2 d), a chromosome is randomly selected from the crossed population for mutation and correction, and the following is realized:
(2d1) Randomly generating p_m ∈ [0,1], where p_m is the mutation probability; if p_m > 0.6, randomly selecting any position on one chromosome; if the value at that position is '0' it is changed to '1', and if it is '1' it is changed to '0', thereby obtaining the mutated chromosome;
(2d2) Counting the number a3 of characters '1' in the mutated chromosome, and recording the number of images in the balanced RGB data set as a;
(2d3) Correcting the mutated chromosome according to the difference between a3 and a:
if a3 - a = 1, 1 position where the character is '1' is randomly selected and modified to '0';
if a3 - a = -1, 1 position where the character is '0' is randomly selected and modified to '1';
(2d4) Replacing the chromosome before mutation with the corrected chromosome to obtain the mutated population.
6. The method of claim 3, wherein (2 e) f individuals are selected from the population after the variation by roulette to remain as offspring chromosomes, as follows:
(2e1) Counting the pixel number c'_j[m] of each ground-object class in each chromosome, and calculating the mean of all classes contained in the chromosome:
mean[j] = (c'_j[1] + c'_j[2] + ... + c'_j[M]) / M,
where mean[j] is the mean value of all classes in the j-th chromosome, m is the class label, and M is the total number of classes of the satellite image;
(2e2) Calculating the variance value var[j] of all classes in each chromosome:
var[j] = ((c'_j[1] - mean[j])^2 + (c'_j[2] - mean[j])^2 + ... + (c'_j[M] - mean[j])^2) / M;
(2e3) Mapping the variance to the fitness in the genetic algorithm:
fitness[j] = max(var) - var[j],
where var[j] is the variance value of all classes in each chromosome, fitness[j] represents the fitness of the j-th chromosome, max(var) represents the maximum chromosome variance in each generation, and f is the population size;
(2e4) Calculating the cumulative probability qs[s] of each chromosome required in the roulette selection strategy:
qs[s] = p[1] + p[2] + ... + p[s],
where p[s] is the probability of each chromosome being selected, expressed as follows:
p[s] = fitness[s] / (fitness[1] + fitness[2] + ... + fitness[f]);
(2e5) Randomly generating a number k ∈ [0,1]; if qs[s-1] < k ≤ qs[s], the s-th chromosome is selected and kept;
(2e6) Repeating step (2e5) f times, and selecting f chromosomes to form a new-generation population.
7. The method of claim 1, wherein (4) the training samples are fed into an existing image cascade network ICNet for training as follows:
(4a) Randomly initializing internal weight of an image cascade network ICNet;
(4b) Sending the training samples into the image cascade network ICNet, wherein after all the training samples have passed through the network, the image cascade network ICNet automatically modifies its internal weights according to the loss value output at the end of the pass;
(4c) Repeating step (4b) until the loss values output by the image cascade network ICNet in 20 consecutive passes fluctuate within ±0.5, and saving the semantic segmentation model with the final weights.
CN201910407112.1A 2019-05-16 2019-05-16 Satellite image preprocessing method based on genetic algorithm Active CN110163141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910407112.1A CN110163141B (en) 2019-05-16 2019-05-16 Satellite image preprocessing method based on genetic algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910407112.1A CN110163141B (en) 2019-05-16 2019-05-16 Satellite image preprocessing method based on genetic algorithm

Publications (2)

Publication Number Publication Date
CN110163141A CN110163141A (en) 2019-08-23
CN110163141B true CN110163141B (en) 2023-04-07

Family

ID=67634728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910407112.1A Active CN110163141B (en) 2019-05-16 2019-05-16 Satellite image preprocessing method based on genetic algorithm

Country Status (1)

Country Link
CN (1) CN110163141B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215815A (en) * 2020-10-12 2021-01-12 杭州视在科技有限公司 Bare soil coverage automatic detection method for construction site

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8761506B1 (en) * 2011-04-22 2014-06-24 DigitalGlobe, Incorporated Pan sharpening digital imagery
CN107862667A (en) * 2017-11-23 2018-03-30 武汉大学 A kind of city shadow Detection and minimizing technology based on high-resolution remote sensing image
CN108632279A (en) * 2018-05-08 2018-10-09 北京理工大学 A kind of multilayer method for detecting abnormality based on network flow
CN109101943A (en) * 2018-08-27 2018-12-28 寿带鸟信息科技(苏州)有限公司 It is a kind of for detecting the machine vision method of Falls Among Old People

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101061491B (en) * 2004-11-19 2010-06-16 皇家飞利浦电子股份有限公司 A stratification method for overcoming unbalanced case numbers in computer-aided lung nodule false positive reduction
US8737733B1 (en) * 2011-04-22 2014-05-27 Digitalglobe, Inc. Hyperspherical pan sharpening

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8761506B1 (en) * 2011-04-22 2014-06-24 DigitalGlobe, Incorporated Pan sharpening digital imagery
CN107862667A (en) * 2017-11-23 2018-03-30 武汉大学 A kind of city shadow Detection and minimizing technology based on high-resolution remote sensing image
CN108632279A (en) * 2018-05-08 2018-10-09 北京理工大学 A kind of multilayer method for detecting abnormality based on network flow
CN109101943A (en) * 2018-08-27 2018-12-28 寿带鸟信息科技(苏州)有限公司 It is a kind of for detecting the machine vision method of Falls Among Old People

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Weed recognition in soybean fields based on lightweight sum-product networks and UAV remote sensing images; Wang Shengsheng et al.; Transactions of the Chinese Society of Agricultural Engineering; 2019-03-23 (No. 06); full text *
Research on fusion of high-resolution remote sensing images; Han Shanshan et al.; Science of Surveying and Mapping; 2009-09-20 (No. 05); full text *

Also Published As

Publication number Publication date
CN110163141A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN111325203B (en) American license plate recognition method and system based on image correction
CN111353497B (en) Identification method and device for identity card information
CN112150493B (en) Semantic guidance-based screen area detection method in natural scene
CN110838131B (en) Method and device for realizing automatic cutout, electronic equipment and medium
CN114332650B (en) Remote sensing image road identification method and system
CN110008912B (en) Social platform matching method and system based on plant identification
CN114170608A (en) Super-resolution text image recognition method, device, equipment and storage medium
CN112580507A (en) Deep learning text character detection method based on image moment correction
CN110874835B (en) Crop leaf disease resistance identification method and system, electronic equipment and storage medium
CN110163141B (en) Satellite image preprocessing method based on genetic algorithm
CN114299379A (en) Shadow area vegetation coverage extraction method based on high dynamic image
CN110992301A (en) Gas contour identification method
CN111738964A (en) Image data enhancement method based on modeling
CN113361530A (en) Image semantic accurate segmentation and optimization method using interaction means
CN116129189A (en) Plant disease identification method, plant disease identification equipment, storage medium and plant disease identification device
CN116245855A (en) Crop variety identification method, device, equipment and storage medium
CN113255704B (en) Pixel difference convolution edge detection method based on local binary pattern
CN114219933A (en) Photographing question searching method
CN114462466A (en) Deep learning-oriented data depolarization method
CN113901916A (en) Visual optical flow feature-based facial fraud action identification method
CN114332637B (en) Remote sensing image water body extraction method and interaction method for remote sensing image water body extraction
CN116912918B (en) Face recognition method, device, equipment and computer readable storage medium
CN114863542B (en) Multi-mode-based juvenile recognition method and system
CN117011719B (en) Water resource information acquisition method based on satellite image
CN113255681B (en) Biological data character recognition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant