CN112149492A - Remote sensing image accurate cloud detection method based on reinforcement genetic learning - Google Patents

Remote sensing image accurate cloud detection method based on reinforcement genetic learning

Info

Publication number
CN112149492A
Authority
CN
China
Prior art keywords
pixel
remote sensing
cloud
action
population
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010642008.3A
Other languages
Chinese (zh)
Other versions
CN112149492B (en)
Inventor
郑红
李晓龙
韩传钊
郑文韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202010642008.3A
Publication of CN112149492A
Application granted
Publication of CN112149492B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/12 Computing arrangements based on biological models using genetic models
    • G06N3/126 Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of remote sensing image processing, and provides a remote sensing image accurate cloud detection method based on reinforcement genetic learning. The method mainly comprises the following steps: an RGB remote sensing image is input and preprocessed, chiefly to filter out noise. By analyzing the pixel environment state, the pixel environment state information required by the model is extracted, "state-action" strategy data is created, a reward and punishment strategy is introduced, and the fitness score of each individual in the population is calculated after the population is initialized. The genetic evolution stage then follows: the "state-action" strategy data is continuously learned through iterated roulette wheel selection, crossover and mutation until the iteration termination condition is met, after which morphological improvement is applied and the final detection result is output. The invention deeply combines reinforcement learning with a genetic algorithm, introduces the environment simulation and reward and punishment rules of reinforcement learning and the optimization strategy of the genetic algorithm into the cloud detection task, and opens a new way for pixel-level cloud detection.

Description

Remote sensing image accurate cloud detection method based on reinforcement genetic learning
Technical Field
The invention belongs to the technical field of satellite remote sensing images, and particularly relates to a remote sensing image accurate cloud detection method based on reinforcement genetic learning.
Background
With the rapid development of remote sensing and space technologies, satellite remote sensing images are increasingly applied, by virtue of their large coverage area, strong timeliness, and good data and geographic comprehensiveness, to fields closely tied to the national economy and people's livelihood. Global cloud data provided by the International Satellite Cloud Climatology Project flux data (ISCCP-FD) show that over 66% of the earth's surface is typically covered by clouds. The large amount of cloud present in optical remote sensing data degrades image quality, reduces the data utilization rate of the imagery, attenuates or even destroys the acquired surface feature information, and poses great challenges to subsequent identification, classification, interpretation and the production of spatio-temporally seamless remote sensing products.
Although cloud detection technology has improved considerably, problems remain in fields that require accurate delineation of cloud areas, such as change detection and ground target recognition. Multi-temporal algorithms can in most cases achieve higher accuracy than single-date cloud detection algorithms, but the clear-sky reference images or high-density time series data they require limit their application, whereas single-date algorithms, needing only a single image, apply far more widely; we therefore consider only single-date cloud detection algorithms here. For optical remote sensing images, single-date cloud detection methods mainly fall into physical methods exploiting spectral reflection characteristics, texture statistics methods starting from the image itself, and machine learning.
The physical method realizes cloud detection mainly by exploiting the difference in spectral reflection characteristics between cloud and ground objects. In the visible channels, cloud reflectivity is higher than that of underlying surfaces such as vegetation, water and soil, so physical methods usually apply a threshold to separate cloud pixels from other pixels. They offer simple models and high computation speed, but remote sensing images contain many land-cover types easily confused with cloud, such as desert and Gobi, ice and snow, and ocean specular reflection, whose reflectivity is close to that of cloud. Moreover, thin cloud areas are semi-transparent, occluding only part of the ground information, so spectral mixing occurs and the detection accuracy of physical methods based on spectral reflection characteristics drops; in particular, the ACCA and Fmask methods need sufficient spectral information to work and cannot be used on remote sensing data with insufficient spectral bands.
The texture statistical method determines texture categories from texture patterns and their spatial distribution, thereby classifying cloud and ground features. It compensates for the shortcomings of physical methods to some extent and improves cloud detection precision, but it is often computationally complex and slow. Because features such as contrast and fractal dimension are largely hand-designed, and the features of thin cloud areas typically lie between those of thick cloud and those of ground objects, misclassification occurs easily and the generalization ability of such algorithms cannot be guaranteed.
To avoid manual feature design and improve the generalization ability of cloud detection algorithms, machine learning techniques that learn features automatically have been widely applied to cloud detection, including artificial neural networks, support vector machines, clustering, random forests and deep learning. With the development of computer technology, cloud detection algorithms based on deep learning have gradually become a research hotspot. Although deep learning can extract features automatically and minimize human intervention through model self-learning, it requires very large datasets, which are currently hard to obtain in the remote sensing field. In addition, cloud areas have no fixed shape, random size and complex edges, and deep learning frameworks inevitably apply pooling operations many times, so deep learning methods are not fine-grained enough for the complex edge detection of cloud areas, especially thin cloud areas.
Existing cloud detection methods such as physical methods, texture statistics and deep learning all struggle with accurate detection of thin cloud areas. Thick cloud areas have distinctive characteristics, and most cloud detection algorithms can detect them accurately. Thin cloud areas differ: thin cloud attenuates, rather than completely blocks, the signal collected by the optical sensor, and pixels contaminated by thin cloud still retain some ground feature information. Ground spectral information is therefore mixed into the thin cloud area, the thin cloud envelope masks the ground features' own characteristics, and detection becomes much harder. Since semi-transparent thin cloud is randomly and widely distributed in almost all cloud images, its detection difficulty together with its wide distribution makes it the most important factor affecting cloud detection accuracy.
Disclosure of Invention
In order to solve these problems, the invention provides a cloud detection algorithm combining reinforcement learning with a genetic algorithm. The method introduces a reward and punishment rule; through continuous interaction between the pixels' environment state information and the algorithm's actions, the algorithm continuously adapts to the environment state, and the genetic algorithm drives evolution toward maximum accumulated return. The description of the pixel environment state is the precondition for the algorithm to interact with the environment, the reward and punishment rule is the basis of the algorithm's strategy learning, and the evolution process is a powerful means of global optimization.
The invention provides a remote sensing image accurate cloud detection method based on reinforcement genetic learning, which comprises the following steps:
s1: image preprocessing, namely filtering noise points existing in the remote sensing image by adopting a median filtering method;
s2: selecting the three pixel environment state factors: gray level, hue, and spatial information;
s3: color space transformation: converting the RGB model into the HSV model to obtain the gray level and hue information of each pixel, then obtaining the spatial information on the gray level layer or the hue layer by describing the relationship between the current pixel and its neighborhood pixels;
s4: fusing the pixel environment state information: first obtaining the state information vectors of the gray level, hue and spatial information, then obtaining the pixel environment state information from these state information vectors;
s5: creating "state-action" policy data;
obtaining a data set D through a state-action strategy, calculating the data set D according to an equation (1),
$$D = \{(e_1, a_1), (e_2, a_2), \ldots, (e_m, a_m)\} \qquad (1)$$
where e is an element of the pixel environment state information matrix E, a represents an execution action of the reinforcement genetic learning algorithm, and $m = p_i \times q_j \times r_k$; the execution action of the reinforcement genetic learning algorithm is to judge whether the current pixel is a cloud pixel or a non-cloud pixel, represented by 1 for a cloud pixel and 0 for a non-cloud pixel;
s6: calculating each individual's fitness score according to the reward and punishment strategy: the reinforcement genetic learning algorithm makes the corresponding execution action according to the current pixel environment state and compares the execution action result with the co-located pixel in the truth map; if the execution action result is correct, u is added to the "state-action" strategy, where u is the reward weight; if the execution action result is wrong, v is subtracted from the "state-action" strategy, where v is the penalty weight; if the individual's final score is negative, the score is set to 0;
s7: initialization: setting the population size, and using a random number generator to initialize the action values in the "state-action" strategy data of the population, so that initial states and actions are randomly paired;
s8: a genetic evolution process, which sequentially comprises selection, crossover and mutation; the selection method is roulette wheel selection; the "state-action" strategy data is continuously learned through iterated roulette selection, crossover and mutation until the iteration termination condition is met; morphological improvement is then performed, and finally the final cloud detection result is output.
Preferably, let V, H, T denote the feature vectors of the gray scale, hue and spatial information respectively, all belonging to the natural number space N; the pixel environment state information can then be represented as the direct product of the three feature vectors:

$$E = V \otimes H \otimes T \qquad (2)$$

wherein each state information vector of the gray scale, hue and spatial information is generated by a direct product of several sub-state information vectors as follows:

$$V = V_1 \otimes V_2 \otimes \cdots \otimes V_i \qquad (3)$$

$$H = H_1 \otimes H_2 \otimes \cdots \otimes H_j \qquad (4)$$

$$T = T_1 \otimes T_2 \otimes \cdots \otimes T_k \qquad (5)$$

where $V_i$ is a p-dimensional vector, $H_j$ is a q-dimensional vector, and $T_k$ is an r-dimensional vector. Substituting sub-formulas (3), (4) and (5) into formula (2) yields the expression of the pixel environment state information:

$$E = V_1 \otimes \cdots \otimes V_i \otimes H_1 \otimes \cdots \otimes H_j \otimes T_1 \otimes \cdots \otimes T_k \qquad (6)$$

with $i, j, k \in \mathbb{N}$; the gray scale, hue and spatial state information are thus fused into E, where each element of E contains all three kinds of state information.
Preferably, the reward and punishment weight w of step S6 can be calculated by equation (7), and the environment fitness score is given by equation (8):

$$w = \begin{cases} u, & \text{if the execution action is correct} \\ -v, & \text{if the execution action is wrong} \end{cases} \qquad (7)$$

$$S = \sum_{t=1}^{N} w_t \qquad (8)$$

Assuming that the number of correctly detected pixels is n, equation (8) can be expressed as equation (9):

$$S = n u - (N - n) v \qquad (9)$$

where n is the number of correctly detected pixels, N is the total number of pixels, u is the reward weight, and v is the penalty weight.
Preferably, the roulette wheel selection in step S8 comprises the following steps:
s81: calculating the fitness score $S_i$ of each individual in the population, where i runs over every individual in the population;
s82: calculating the probability $P_i$ that each individual is retained in the next generation population;
s83: calculating the cumulative probability $Q_i$ of each individual, as shown in equation (10):

$$Q_i = \sum_{j=1}^{i} P_j \qquad (10)$$

s84: generating a random number r uniformly distributed in the interval [0, 1];
s85: if $r < Q_1$, selecting individual 1; otherwise selecting the individual i for which $Q_{i-1} < r \le Q_i$ holds;
s86: repeating steps S84 and S85 G times, where G is the population size.
Preferably, the probability $P_i$ in step S82 is calculated according to equation (11):

$$P_i = \frac{S_i}{\sum_{j=1}^{G} S_j} \qquad (11)$$

where $S_i$ is the fitness score, i indexes each individual in the population, and G is the population size.
Preferably, the conversion formulas for converting the RGB model into the HSV model in step S3 are as follows:

$$C_{max} = \max(r, g, b) \qquad (12)$$

$$C_{min} = \min(r, g, b) \qquad (13)$$

$$H = \begin{cases} 0^{\circ}, & \Delta = 0 \\ 60^{\circ} \times \left( \dfrac{g-b}{\Delta} \bmod 6 \right), & C_{max} = r \\ 60^{\circ} \times \left( \dfrac{b-r}{\Delta} + 2 \right), & C_{max} = g \\ 60^{\circ} \times \left( \dfrac{r-g}{\Delta} + 4 \right), & C_{max} = b \end{cases} \qquad (14)$$

$$S = \begin{cases} 0, & C_{max} = 0 \\ \dfrac{\Delta}{C_{max}}, & C_{max} \ne 0 \end{cases} \qquad (15)$$

$$V = C_{max} \qquad (16)$$

where $\Delta = C_{max} - C_{min}$; H represents the hue, S represents the color saturation, and V represents the lightness; $C_{max}$ is the maximum and $C_{min}$ the minimum gray value over the three channels r (red), g (green) and b (blue).
Preferably, the image preprocessing in step S1 is performed on the input RGB remote sensing image.
Preferably, in step S7 the population size is set to 90, the crossover rate to 0.9, the mutation rate to 0.001, and the maximum number of cycles to 1000; training ends when the change in accuracy over 10 consecutive iterations is less than 0.0001 or the maximum number of cycles is reached.
Preferably, the method further comprises the following steps: s9: calculating fitness scores from the selection, crossover and mutation results of step S8, and calculating the accuracy; s10: if the accuracy does not meet the termination condition, repeating steps S8-S9; if the accuracy meets the termination condition, performing morphological improvement to obtain the final cloud detection result.
Preferably, comparing the execution action result with the co-located pixel in the truth map in step S6 specifically comprises: comparing the execution action result for a pixel with the value of the same-position pixel in the truth map; if they are the same, the execution action is correct, and if they differ, the execution action result is wrong.
Compared with the prior art, the invention has the beneficial effects that:
(1) Reinforcement learning and the genetic algorithm are deeply combined: the environment simulation and reward and punishment rules of reinforcement learning and the optimization strategy of the genetic algorithm are introduced into the cloud detection task, opening a new way for pixel-level cloud detection.
(2) The information influencing pixel classification is fused into the pixel environment state, which avoids hand-designing specific features and improves the detection accuracy and generalization ability of the method.
(3) Once the optimal "state-action" strategy data has been learned, whether the current pixel is a cloud pixel can be judged quickly, enabling fast pixel-level cloud detection.
(4) The operation model of the invention is simple, highly portable, and well suited to parallel computing.
Drawings
Fig. 1 is a schematic diagram of the RGB-to-HSV model conversion. (a) RGB model; (b) HSV model.
Fig. 2 is a schematic view of roulette wheel selection.
Fig. 3 is a crossover diagram. (a) Parent individuals; (b) offspring individuals generated after crossover; the column position $e_i b_i$ is a randomly chosen crossover point, and the crossover point and the gene sequence after it are exchanged.
Fig. 4 is a mutation diagram. (a) Individuals before mutation; (b) individuals after mutation; the column positions $e_2 b_2$, $e_{i+1} b_{i+1}$ and $e_{m-1} b_{m-1}$ are the randomly mutated gene positions.
Fig. 5 is a schematic view of the detection process of the present invention.
Fig. 6 is a flow chart of the remote sensing image accurate cloud detection method based on reinforcement genetic learning provided by the invention.
Fig. 7 shows the cloud detection result for original image 1. (a) Original image 1; (b) detection result.
Fig. 8 shows the cloud detection result for original image 2. (a) Original image 2; (b) detection result.
Fig. 9 shows the cloud detection result for original image 3. (a) Original image 3; (b) detection result.
Detailed Description
The invention is further illustrated by way of example in the following description with reference to the accompanying drawings:
the invention provides a remote sensing image accurate cloud detection method based on reinforcement genetic learning, which specifically comprises the following steps:
s1: preprocessing an image;
noise points may exist in the remote sensing image, and the noise points are filtered out by using median filtering. The basic principle of the method is to replace the value of one point in a digital image or a digital sequence with the median of each point value in a neighborhood of the point, and to make the surrounding pixel values close to the true value, thereby eliminating the isolated noise point. The median filtering has good filtering effect on impulse noise, and particularly, the median filtering can protect the edge of a signal from being blurred while filtering the noise. In addition, the median filtering algorithm is simple and easy to realize by engineering.
S2: pixel environmental state factor selection
The pixel environment state is closely related to gray level, hue and spatial correlation. First, gray level information is the most important information in a remote sensing image: different areas of the image can be distinguished visually mainly because of it. Second, in an RGB image, hue is another very important cue; different substances have different spectral reflectances in different wavelength bands, so different substances in the remote sensing image show different hues. Finally, spatial information among pixels represents the structural attributes of object surfaces and the relevance between image elements, and better matches the macroscopic observations of human vision. Since the information in a remote sensing image is complex, gray level, hue or spatial information alone is insufficient to represent the environment state of a pixel, so the three factors are fused together.
S3: color space transformation
The method takes an RGB remote sensing image as input, and color space conversion is performed first in order to obtain gray level, hue and spatial information. The HSV color model, built from hue, saturation and lightness values, is one suitable choice; its advantage is that there is little correlation between the components, so changing one component barely affects the others. Gray level and hue information of a pixel can be obtained through the color model transformation, as shown in fig. 1; on the gray level layer or the hue layer, spatial information can then be obtained by describing the relationship between the current pixel and its neighborhood pixels.
As shown in fig. 1, after the RGB model is converted into the HSV model, H represents hue, measured as an angle; S represents color saturation: the more saturated the color, the closer it is to a pure spectral color; V represents lightness, a measure of how bright the color is, running from black to white. The conversion formulas are as follows.
$$C_{max} = \max(r, g, b) \qquad (12)$$

$$C_{min} = \min(r, g, b) \qquad (13)$$

$$H = \begin{cases} 0^{\circ}, & \Delta = 0 \\ 60^{\circ} \times \left( \dfrac{g-b}{\Delta} \bmod 6 \right), & C_{max} = r \\ 60^{\circ} \times \left( \dfrac{b-r}{\Delta} + 2 \right), & C_{max} = g \\ 60^{\circ} \times \left( \dfrac{r-g}{\Delta} + 4 \right), & C_{max} = b \end{cases} \qquad (14)$$

$$S = \begin{cases} 0, & C_{max} = 0 \\ \dfrac{\Delta}{C_{max}}, & C_{max} \ne 0 \end{cases} \qquad (15)$$

$$V = C_{max} \qquad (16)$$

where $\Delta = C_{max} - C_{min}$.
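The per-pixel conversion can be written directly from equations (12) to (16). The sketch below assumes the r, g, b values have already been scaled to [0, 1] and returns H in degrees.

```python
def rgb_to_hsv(r, g, b):
    """Convert one pixel from RGB to HSV following equations (12) to (16).

    Assumes r, g, b are already scaled to [0, 1]; H is returned in degrees.
    """
    c_max = max(r, g, b)                        # equation (12)
    c_min = min(r, g, b)                        # equation (13)
    delta = c_max - c_min
    if delta == 0:                              # equation (14), case by case
        h = 0.0
    elif c_max == r:
        h = 60.0 * (((g - b) / delta) % 6)
    elif c_max == g:
        h = 60.0 * ((b - r) / delta + 2)
    else:                                       # c_max == b
        h = 60.0 * ((r - g) / delta + 4)
    s = 0.0 if c_max == 0 else delta / c_max    # equation (15)
    v = c_max                                   # equation (16)
    return h, s, v
```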
S4: pixel ambient state information fusion
Let V, H, T denote the feature vectors of the gray scale, hue and spatial information respectively, all belonging to the natural number space N. The pixel environment state information can be represented as the direct product of the three feature vectors:

$$E = V \otimes H \otimes T \qquad (2)$$

wherein each state information vector may be generated by a direct product of several sub-state information vectors:

$$V = V_1 \otimes V_2 \otimes \cdots \otimes V_i \qquad (3)$$

$$H = H_1 \otimes H_2 \otimes \cdots \otimes H_j \qquad (4)$$

$$T = T_1 \otimes T_2 \otimes \cdots \otimes T_k \qquad (5)$$

where $V_i$ is a p-dimensional vector, $H_j$ is a q-dimensional vector, and $T_k$ is an r-dimensional vector. Substituting the three sub-formulas above into formula (2) yields the expression of the pixel environment state information:

$$E = V_1 \otimes \cdots \otimes V_i \otimes H_1 \otimes \cdots \otimes H_j \otimes T_1 \otimes \cdots \otimes T_k \qquad (6)$$

where $i, j, k \in \mathbb{N}$. Thus, the state information is merged into E, where each element of E contains all three kinds of state information.
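A concrete way to realize this fusion is to quantize each pixel's gray level, hue and a spatial descriptor into discrete bins and index the direct-product space. The sketch below is one possible encoding; the bin counts p, q, r and the choice of spatial descriptor are assumptions, since the patent leaves them open.

```python
def pixel_state_index(v, h, t, p=8, q=8, r=4):
    """Map one pixel's (gray level, hue, spatial) measurements to a state index.

    v and t are assumed scaled to [0, 1] and h is in degrees; p, q, r are the
    numbers of quantization bins (assumed values), giving m = p * q * r states.
    """
    vi = min(int(v * p), p - 1)            # quantized gray level bin
    hj = min(int(h / 360.0 * q), q - 1)    # quantized hue bin
    tk = min(int(t * r), r - 1)            # quantized spatial-descriptor bin
    # Row-major index into the direct-product space E = V x H x T.
    return (vi * q + hj) * r + tk
```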
S5: creating "state-action" policy data
To link the pixel environment state information to the algorithm, we connect the pixel state information to the algorithm's execution action by creating "state-action" policy data. The algorithm performs only two actions: judging the current pixel to be a cloud pixel or a non-cloud pixel, represented by 1 and 0 respectively. All "state-action" policies together constitute a new data set D:

$$D = \{(e_1, a_1), (e_2, a_2), \ldots, (e_m, a_m)\} \qquad (1)$$

where e is an element of the pixel environment state information matrix E, a represents the action of the algorithm, and $m = p_i \times q_j \times r_k$. Through the "state-action" policy data the algorithm can take the corresponding decision action for pixels in different environment states, thereby establishing the link between the pixel environment state information and the algorithm. The role of the genetic algorithm is to let the model find an optimal set of "state-action" policy data by itself.
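In practice, one individual's "state-action" policy can be stored as a flat action table with one entry per environment state, so that detection reduces to a table lookup per pixel; the sketch below assumes the state-index map produced by the encoding sketched above.

```python
import numpy as np

def detect_with_policy(policy, state_indices):
    """Apply one individual's "state-action" policy to a whole image.

    `policy` holds one action per environment state (1 = cloud, 0 = non-cloud);
    `state_indices` is the per-pixel state-index map from the previous sketch.
    Detection is a single table lookup per pixel.
    """
    return policy[state_indices]  # the action a paired with each pixel's state e
```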
S6: calculating fitness score according to reward and punishment strategy
The reward and punishment strategy can be summarized as follows: the model makes the corresponding detection action according to the current pixel environment state and compares the result with the co-located pixel in the truth map. If the detection is correct, u is added to the "state-action" strategy, where u is the reward weight; if the detection is wrong, v is subtracted from the "state-action" strategy, where v is the penalty weight; if an individual's final score is negative, the score is set to 0. The reward and punishment weight w of step S6 can be calculated by equation (7), and the environment fitness score is given by equation (8):

$$w = \begin{cases} u, & \text{if the detection is correct} \\ -v, & \text{if the detection is wrong} \end{cases} \qquad (7)$$

$$S = \sum_{t=1}^{N} w_t \qquad (8)$$

Assuming that the number of correctly detected pixels is n, equation (8) can be expressed as equation (9):

$$S = n u - (N - n) v \qquad (9)$$

where n is the number of correctly detected pixels, N is the total number of pixels, u is the reward weight, and v is the penalty weight.
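Equations (7) to (9) reduce to a single expression once the number of correctly detected pixels is counted; the sketch below computes the clipped fitness score against a binary truth mask, with the concrete values of u and v left as free parameters as in the patent.

```python
import numpy as np

def fitness(prediction, truth, u=1.0, v=1.0):
    """Fitness score S = n*u - (N - n)*v, clipped at zero (equation (9)).

    The default values of the reward weight u and penalty weight v are
    assumptions; the patent leaves them as free parameters.
    """
    n = int(np.sum(prediction == truth))   # number of correctly detected pixels
    N = truth.size                         # total number of pixels
    return max(n * u - (N - n) * v, 0.0)   # negative scores are set to 0
```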
S7: initialization
Initial settings are made at the start of algorithm training. When the population size exceeds 90, the accuracy of the method is insensitive to further increases, so the population size is set to 90. A random number generator is then used to initialize the action values in the "state-action" policy data of the population, so that initial states and actions are randomly paired. Based on repeated experiments with different parameter settings, a parameter combination suited to the method was found: the crossover rate is set to 0.9, the mutation rate to 0.001, and the maximum number of cycles to 1000. Training ends when the change in accuracy over 10 consecutive iterations is less than 0.0001 or the maximum number of cycles is reached.
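A possible initialization with the parameters reported above might look as follows; the helper assumes policies are stored as 0/1 action vectors of length m, as in the earlier sketches.

```python
import numpy as np

# Hyperparameters reported in step S7 of the patent.
POP_SIZE = 90
CROSS_RATE = 0.9
MUTATION_RATE = 0.001
MAX_CYCLES = 1000
CONVERGENCE_EPS = 1e-4  # stop when accuracy changes < 0.0001 for 10 rounds

def init_population(m, rng=None):
    """Randomly pair every state with an action for each of the 90 individuals."""
    rng = rng or np.random.default_rng()
    return rng.integers(0, 2, size=(POP_SIZE, m))  # one 0/1 action per state
```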
S8: genetic evolution process
The genetic evolution process comprises three parts: selection, crossover and mutation. The purpose of selection is to keep good individuals in the population and eliminate poor ones. To ensure that every individual in the population has a chance of being retained, thereby preserving population diversity and preventing premature convergence, we use roulette wheel selection as the selection operator, as shown in fig. 2.
The roulette wheel selection in step S8 comprises the following steps:
s81: calculating the fitness score $S_i$ of each individual in the population, where i runs over every individual in the population;
s82: calculating the probability $P_i$ that each individual is retained in the next generation population;
s83: calculating the cumulative probability $Q_i$ of each individual, as shown in equation (10):

$$Q_i = \sum_{j=1}^{i} P_j \qquad (10)$$

s84: generating a random number r uniformly distributed in the interval [0, 1];
s85: if $r < Q_1$, selecting individual 1; otherwise selecting the individual i for which $Q_{i-1} < r \le Q_i$ holds;
s86: repeating steps S84 and S85 G times, where G is the population size.
The probability $P_i$ in step S82 is calculated according to equation (11):

$$P_i = \frac{S_i}{\sum_{j=1}^{G} S_j} \qquad (11)$$
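The six selection steps S81 to S86 and equations (10) and (11) map directly onto a cumulative-sum lookup; the sketch below is one way to realize them.

```python
import numpy as np

def roulette_select(scores, rng=None):
    """Roulette wheel selection over fitness scores (steps S81 to S86).

    Returns the indices of G survivors, where G = len(scores).
    """
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    if scores.sum() == 0:               # degenerate case: all scores clipped to 0
        scores = scores + 1.0           # fall back to uniform selection
    P = scores / scores.sum()           # equation (11): selection probability
    Q = np.cumsum(P)                    # equation (10): cumulative probability
    survivors = []
    for _ in range(len(scores)):        # repeat steps S84 and S85 G times
        r = rng.random()                # uniform random number in [0, 1)
        # First index i with r <= Q_i; clamp guards against rounding of Q[-1].
        survivors.append(min(int(np.searchsorted(Q, r)), len(scores) - 1))
    return survivors
```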
the crossing is used for improving the global searching capability of the algorithm; the mutation is to avoid the algorithm from falling into local extreme values and prevent the algorithm from generating premature phenomenon. In the crossing and mutation process, only the action values in the "state-action" strategy need to be crossed and mutated, and the "state-action" strategy has a length of m ═ p, as can be seen from the abovei×qj×rkAs shown in fig. 3 and 4.
S9: calculating the fitness scores from the selection, crossover and mutation results of step S8, and calculating the accuracy;
s10: morphological improvement
After all cloud-like pixels have been identified, the invention processes them morphologically to effectively eliminate bright features characterized by large perimeter-to-area ratios. The basic operators in morphology are erosion and dilation, which relate directly to object shape. Since cities/buildings and mountain snow usually consist of isolated pixels, pixel lines and pixel rectangles, they are removed by erosion with rectangular or disk-shaped structuring elements. This erosion generally does not remove whole cloud areas, because clouds have a low perimeter-to-area ratio and relatively large extent. To restore the cloud shape, the remaining pixels are dilated with the same structuring element so as to preserve the original cloud shape as much as possible. After morphological improvement, the final cloud detection result is obtained.
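This erosion-followed-by-dilation step is morphological opening; a minimal sketch with OpenCV is given below. The 3×3 rectangular structuring element is an assumption, since the patent only requires a rectangular or disk-shaped element.

```python
import cv2
import numpy as np

def morphological_improvement(mask, kernel_size=3):
    """Erode then dilate the binary cloud mask with the same structuring element.

    Erosion removes bright non-cloud objects (isolated pixels, lines, small
    rectangles) with large perimeter-to-area ratios; dilating with the same
    element restores the shape of the surviving cloud regions. The 3x3
    rectangular element is an assumption; the patent allows rectangular or
    disk-shaped elements.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    eroded = cv2.erode(mask.astype(np.uint8), kernel)
    return cv2.dilate(eroded, kernel)
```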
As shown in fig. 5, after inputting the original image, the method analyzes the pixel environment state, extracts the three factors affecting it, generates the "state-action" strategy set, learns the optimal strategy through evolutionary optimization to obtain a preliminary detection result, and applies morphological improvement to that preliminary result to obtain the final cloud detection result.
As shown in fig. 6, the process of the invention can be summarized as follows: an RGB remote sensing image is input and preprocessed, chiefly to filter out noise. By analyzing the pixel environment state, the pixel environment state information required by the model is extracted, "state-action" strategy data is created, a reward and punishment strategy is introduced, and the fitness score of each individual in the population is calculated after the population is initialized. The genetic evolution stage then follows: the "state-action" strategy data is continuously learned through iterated roulette wheel selection, crossover and mutation until the iteration termination condition is met, after which morphological improvement is applied and the final detection result is output.
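Tying the earlier sketches together, a simplified training loop might look as follows; it assumes the helper functions defined above (init_population, detect_with_policy, fitness, roulette_select, crossover, mutate) and omits the accuracy-convergence early stop (change below 0.0001 over 10 consecutive rounds) for brevity.

```python
import numpy as np

def train(state_indices, truth, m, rng=None):
    """Simplified training loop built from the earlier sketches.

    `state_indices` is the per-pixel state-index map, `truth` the binary
    ground-truth cloud mask, and m = p * q * r the number of states.
    """
    rng = rng or np.random.default_rng()
    population = init_population(m, rng)
    for _ in range(MAX_CYCLES):
        scores = [fitness(detect_with_policy(ind, state_indices), truth)
                  for ind in population]
        survivors = population[roulette_select(scores, rng)]
        next_gen = []
        for a, b in zip(survivors[0::2], survivors[1::2]):  # pair off survivors
            child_a, child_b = crossover(a, b, CROSS_RATE, rng)
            next_gen += [mutate(child_a, MUTATION_RATE, rng),
                         mutate(child_b, MUTATION_RATE, rng)]
        population = np.array(next_gen)
    scores = [fitness(detect_with_policy(ind, state_indices), truth)
              for ind in population]
    return population[int(np.argmax(scores))]  # best "state-action" policy
```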
Figs. 7 to 9 display the cloud detection results of the invention, with the original image and the detection result placed side by side and detail views shown at the same position in both. Figs. 7 to 9 contain both thick clouds that completely occlude the ground features and semi-transparent thin clouds that blur the ground feature information; the detail views show that the proposed accurate cloud detection method performs well on thin cloud areas.
The above description is only an example of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A remote sensing image accurate cloud detection method based on reinforcement genetic learning is characterized by comprising the following steps:
s1: image preprocessing, namely filtering noise points existing in the remote sensing image by adopting a median filtering method;
s2: selecting the three pixel environment state factors: gray level, hue, and spatial information;
s3: color space transformation: converting the RGB model into the HSV model to obtain the gray level and hue information of each pixel, then obtaining the spatial information on the gray level layer or the hue layer by describing the relationship between the current pixel and its neighborhood pixels;
s4: fusing the pixel environment state information; first obtaining the state information vectors of the gray level, hue and spatial information, then obtaining the pixel environment state information from these state information vectors;
s5: creating "state-action" policy data;
obtaining a data set D through a state-action strategy, calculating the data set D according to an equation (1),
$$D = \{(e_1, a_1), (e_2, a_2), \ldots, (e_m, a_m)\} \qquad (1)$$
where e is an element of the pixel environment state information matrix E, a represents an execution action of the reinforcement genetic learning algorithm, and $m = p_i \times q_j \times r_k$; the execution action of the reinforcement genetic learning algorithm is to judge whether the current pixel is a cloud pixel or a non-cloud pixel, represented by 1 for a cloud pixel and 0 for a non-cloud pixel;
s6: calculating each individual's fitness score according to the reward and punishment strategy: the reinforcement genetic learning algorithm makes the corresponding execution action according to the current pixel environment state and compares the execution action result with the co-located pixel in the truth map; if the execution action result is correct, u is added to the "state-action" strategy, where u is the reward weight; if the execution action result is wrong, v is subtracted from the "state-action" strategy, where v is the penalty weight; if the individual's final score is negative, the score is set to 0;
s7: initialization: setting the population size, and using a random number generator to initialize the action values in the "state-action" strategy data of the population, so that initial states and actions are randomly paired;
s8: a genetic evolution process, which sequentially comprises selection, crossover and mutation; the selection method is roulette wheel selection; the "state-action" strategy data is continuously learned through iterated roulette selection, crossover and mutation until the iteration termination condition is met; morphological improvement is then performed, and finally the final cloud detection result is output.
2. The method for accurately detecting cloud in remote sensing images according to claim 1, wherein V, H, T, the feature vectors respectively representing gray scale, hue and spatial information, all belong to the natural number space N, and the pixel environment state information can be represented as the direct product of the three feature vectors:

$$E = V \otimes H \otimes T \qquad (2)$$

wherein each state information vector of the gray scale, hue and spatial information is generated by a direct product of several sub-state information vectors as follows:

$$V = V_1 \otimes V_2 \otimes \cdots \otimes V_i \qquad (3)$$

$$H = H_1 \otimes H_2 \otimes \cdots \otimes H_j \qquad (4)$$

$$T = T_1 \otimes T_2 \otimes \cdots \otimes T_k \qquad (5)$$

where $V_i$ is a p-dimensional vector, $H_j$ is a q-dimensional vector, and $T_k$ is an r-dimensional vector; substituting sub-formulas (3), (4) and (5) into formula (2) yields the expression of the pixel environment state information:

$$E = V_1 \otimes \cdots \otimes V_i \otimes H_1 \otimes \cdots \otimes H_j \otimes T_1 \otimes \cdots \otimes T_k \qquad (6)$$

with $i, j, k \in \mathbb{N}$; the gray scale, hue and spatial state information are fused into E, where each element of E contains all three kinds of state information.
3. The remote sensing image accurate cloud detection method of claim 1, wherein the reward and punishment weight w in step S6 can be calculated by equation (7), and the environment fitness score is given by equation (8):

$$w = \begin{cases} u, & \text{if the execution action is correct} \\ -v, & \text{if the execution action is wrong} \end{cases} \qquad (7)$$

$$S = \sum_{t=1}^{N} w_t \qquad (8)$$

assuming that the number of correctly detected pixels is n, equation (8) can be expressed as equation (9):

$$S = n u - (N - n) v \qquad (9)$$

where n is the number of correctly detected pixels, N is the total number of pixels, u is the reward weight, and v is the penalty weight.
4. The remote sensing image accurate cloud detection method of claim 1, wherein the roulette wheel selection in step S8 comprises the following steps:
s81: calculating the fitness score $S_i$ of each individual in the population, where i runs over every individual in the population;
s82: calculating the probability $P_i$ that each individual is retained in the next generation population;
s83: calculating the cumulative probability $Q_i$ of each individual, as shown in equation (10):

$$Q_i = \sum_{j=1}^{i} P_j \qquad (10)$$

s84: generating a random number r uniformly distributed in the interval [0, 1];
s85: if $r < Q_1$, selecting individual 1; otherwise selecting the individual i for which $Q_{i-1} < r \le Q_i$ holds;
s86: repeating steps S84 and S85 G times, where G is the population size.
5. The method for accurately detecting cloud in remote sensing images according to claim 4, wherein the probability $P_i$ in step S82 is calculated according to equation (11):

$$P_i = \frac{S_i}{\sum_{j=1}^{G} S_j} \qquad (11)$$

where $S_i$ is the fitness score, i indexes each individual in the population, and G is the population size.
6. The method for accurately detecting cloud in remote sensing images according to claim 1, wherein the conversion formulas for converting the RGB model into the HSV model in step S3 are as follows:

$$C_{max} = \max(r, g, b) \qquad (12)$$

$$C_{min} = \min(r, g, b) \qquad (13)$$

$$H = \begin{cases} 0^{\circ}, & \Delta = 0 \\ 60^{\circ} \times \left( \dfrac{g-b}{\Delta} \bmod 6 \right), & C_{max} = r \\ 60^{\circ} \times \left( \dfrac{b-r}{\Delta} + 2 \right), & C_{max} = g \\ 60^{\circ} \times \left( \dfrac{r-g}{\Delta} + 4 \right), & C_{max} = b \end{cases} \qquad (14)$$

$$S = \begin{cases} 0, & C_{max} = 0 \\ \dfrac{\Delta}{C_{max}}, & C_{max} \ne 0 \end{cases} \qquad (15)$$

$$V = C_{max} \qquad (16)$$

where $\Delta = C_{max} - C_{min}$; H represents the hue, S represents the color saturation, and V represents the lightness; $C_{max}$ is the maximum and $C_{min}$ the minimum gray value over the three channels r (red), g (green) and b (blue).
7. The method for accurately detecting cloud in remote sensing images according to claim 1, wherein the image preprocessing in step S1 is performed on the input RGB remote sensing image.
8. The method for accurately detecting cloud in remote sensing images according to claim 1, wherein in step S7 the population size is set to 90, the crossover rate to 0.9, the mutation rate to 0.001, and the maximum number of cycles to 1000; training ends when the change in accuracy over 10 consecutive iterations is less than 0.0001 or the maximum number of cycles is reached.
9. The accurate cloud detection method for remote sensing images according to claim 1, further comprising the steps of: s9: calculating fitness scores from the selection, crossover and mutation results of step S8, and calculating the accuracy; s10: if the accuracy does not meet the termination condition, repeating steps S8-S9; if the accuracy meets the termination condition, performing morphological improvement to obtain the final cloud detection result.
10. The method for accurately detecting cloud in remote sensing images according to claim 1, wherein comparing the execution action result with the co-located pixel in the truth map in step S6 specifically comprises: comparing the execution action result for a pixel with the value of the same-position pixel in the truth map; if they are the same, the execution action is correct, and if they differ, the execution action result is wrong.
CN202010642008.3A 2020-07-06 2020-07-06 Remote sensing image accurate cloud detection method based on reinforcement genetic learning Active CN112149492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010642008.3A CN112149492B (en) 2020-07-06 2020-07-06 Remote sensing image accurate cloud detection method based on reinforcement genetic learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010642008.3A CN112149492B (en) 2020-07-06 2020-07-06 Remote sensing image accurate cloud detection method based on reinforcement genetic learning

Publications (2)

Publication Number Publication Date
CN112149492A (en) 2020-12-29
CN112149492B (en) 2022-08-30

Family

ID=73889132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010642008.3A Active CN112149492B (en) 2020-07-06 2020-07-06 Remote sensing image accurate cloud detection method based on reinforcement genetic learning

Country Status (1)

Country Link
CN (1) CN112149492B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408370A (en) * 2021-05-31 2021-09-17 西安电子科技大学 Forest change remote sensing detection method based on adaptive parameter genetic algorithm
CN114723960A (en) * 2022-04-02 2022-07-08 湖南三湘银行股份有限公司 Additional verification method and system for enhancing bank account security

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170357872A1 (en) * 2015-12-07 2017-12-14 The Climate Corporation Cloud detection on remote sensing imagery
CN110119728A (en) * 2019-05-23 2019-08-13 哈尔滨工业大学 Remote sensing images cloud detection method of optic based on Multiscale Fusion semantic segmentation network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170357872A1 (en) * 2015-12-07 2017-12-14 The Climate Corporation Cloud detection on remote sensing imagery
CN110119728A (en) * 2019-05-23 2019-08-13 哈尔滨工业大学 Remote sensing images cloud detection method of optic based on Multiscale Fusion semantic segmentation network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI YINGZI et al.: "Multi-agent Co-evolutionary Scheduling Approach based on Genetic Reinforcement Learning", 2009 Fifth International Conference on Natural Computation *
夏旻 (Xia Min) et al.: "多维加权密集连接卷积网络的卫星云图云检测" [Cloud detection in satellite cloud images with a multidimensional weighted densely connected convolutional network], 《计算机工程与应用》 (Computer Engineering and Applications) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408370A (en) * 2021-05-31 2021-09-17 西安电子科技大学 Forest change remote sensing detection method based on adaptive parameter genetic algorithm
CN113408370B (en) * 2021-05-31 2023-12-19 西安电子科技大学 Forest change remote sensing detection method based on adaptive parameter genetic algorithm
CN114723960A (en) * 2022-04-02 2022-07-08 湖南三湘银行股份有限公司 Additional verification method and system for enhancing bank account security

Also Published As

Publication number Publication date
CN112149492B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
Jia et al. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot
CN109934154B (en) Remote sensing image change detection method and detection device
CN110765941A (en) Seawater pollution area identification method and equipment based on high-resolution remote sensing image
CN101814144B (en) Water-free bridge target identification method in remote sensing image
CN103049763B (en) Context-constraint-based target identification method
CN111696123A (en) Remote sensing image water area segmentation and extraction method based on super-pixel classification and identification
CN105279519B (en) Remote sensing image Clean water withdraw method and system based on coorinated training semi-supervised learning
CN113239830B (en) Remote sensing image cloud detection method based on full-scale feature fusion
CN107909015A (en) Hyperspectral image classification method based on convolutional neural networks and empty spectrum information fusion
CN108010034A (en) Commodity image dividing method and device
CN106845408A (en) A kind of street refuse recognition methods under complex environment
CN109543632A (en) A kind of deep layer network pedestrian detection method based on the guidance of shallow-layer Fusion Features
CN112149492B (en) Remote sensing image accurate cloud detection method based on reinforcement genetic learning
CN104217440B (en) A kind of method extracting built-up areas from remote sensing images
CN112488050A (en) Color and texture combined aerial image scene classification method and system
CN106228130B (en) Remote sensing image cloud detection method of optic based on fuzzy autoencoder network
CN107545571A (en) A kind of image detecting method and device
CN109740485A (en) Reservoir or dyke recognition methods based on spectrum analysis and depth convolutional neural networks
Chini et al. Comparing statistical and neural network methods applied to very high resolution satellite images showing changes in man-made structures at rocky flats
CN107992856A (en) High score remote sensing building effects detection method under City scenarios
CN112733614A (en) Pest image detection method with similar size enhanced identification
CN107292328A (en) The remote sensing image shadow Detection extracting method and system of multiple dimensioned multiple features fusion
CN109886146A (en) Flood information remote-sensing intelligent acquisition method and equipment based on Machine Vision Detection
CN116863345A (en) High-resolution image farmland recognition method based on dual attention and scale fusion
CN109829507A (en) It takes photo by plane ultra-high-tension power transmission line environment detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant