CN112131946A - Automatic extraction method for vegetation and water body information of optical remote sensing image - Google Patents

Automatic extraction method for vegetation and water body information of optical remote sensing image

Info

Publication number
CN112131946A
CN112131946A (application CN202010850663.8A)
Authority
CN
China
Prior art keywords
vegetation
remote sensing
samples
optical remote
water body
Prior art date
Legal status
Granted
Application number
CN202010850663.8A
Other languages
Chinese (zh)
Other versions
CN112131946B (en)
Inventor
欧阳斌 (Ouyang Bin)
Current Assignee
Changsha Yinhan Technology Co ltd
Original Assignee
Changsha Yinhan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Changsha Yinhan Technology Co., Ltd.
Priority to CN202010850663.8A
Publication of CN112131946A
Application granted
Publication of CN112131946B
Legal status: Active


Classifications

    • G06V 20/188 Scenes; terrestrial scenes; vegetation
    • G06F 18/23213 Pattern recognition; non-hierarchical clustering techniques with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/24 Pattern recognition; classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V 20/182 Scenes; terrestrial scenes; network patterns, e.g. roads or rivers
    • Y02A 90/30 Technologies for adaptation to climate change; assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an automatic extraction method for vegetation and water body information in optical remote sensing images. The method obtains optical remote sensing data samples and randomly extracts 10% of them as a sample subset; calculates the normalized vegetation index and normalized water index of all samples in the subset to obtain general characteristic spectra; performs supervised classification of the subset based on the minimum spectral angle, while recording the minimum spectral angle for the vegetation, water, and other types; takes the 50% of samples with the smallest spectral angles and applies k-means unsupervised classification based on minimum Euclidean distance, obtaining 10 characteristic spectra per type, 30 in total; and performs pixel-by-pixel supervised classification of the global image based on minimum Euclidean distance to obtain the vegetation and water body extraction results. No prior samples are needed at any stage and no manual intervention is required, so vegetation and water body information is extracted fully automatically.

Description

Automatic extraction method for vegetation and water body information of optical remote sensing image
Technical Field
The invention relates to the technical field of remote sensing image extraction, in particular to an automatic extraction method of vegetation and water body information of an optical remote sensing image.
Background
Supervised classification: a pattern-recognition method, also called the training-field or training-classification method, in which a statistical discriminant function is established from typical training samples and used to classify the image. Based on samples from a known training area, characteristic parameters are selected, decision rules are derived, and a discriminant function is built to classify the image. The training area must be typical and representative. If the decision criterion meets the required classification accuracy, it is accepted; otherwise the decision rule is re-established until the accuracy requirement is met. Because the method requires prior training samples, fully automatic classification is impossible, and samples obtained from one image cannot be directly reused to classify, extrapolate, or generalize to another image.
Unsupervised classification: the classification process applies no prior knowledge and classifies "blindly" from the data alone, exploiting the natural clustering of the spectral features of ground objects in the remote sensing image. The result only distinguishes different classes but cannot determine their attributes: unsupervised classification can separate samples into several categories but cannot describe them, and the attributes of each category must be determined afterwards by visual interpretation or field investigation. Its drawback is that the classification target is unclear; although classification is automatic, the result often fails to meet practical requirements.
the random forest classification method comprises the following steps: an algorithm for integrating multiple trees by the idea of Ensemble Learning, the basic unit of which is a decision tree, and the essence of which belongs to a large branch of machine Learning, namely an Ensemble Learning (Ensemble Learning) method. Each decision tree is a classifier (assuming the classification problem is now addressed), then for an input sample, N trees will have N classification results. And the random forest integrates all classification voting results, and the classification with the largest voting times is designated as final output. Like supervised classification, random forests also require a large number of prior samples, and full-automatic classification in the true sense cannot be realized. The deep learning method comprises the following steps: is a new research direction in the field of Machine Learning (ML), which is introduced into Machine Learning to make it closer to the original goal, artificial intelligence. Through multilayer processing, after the initial low-level feature representation is gradually converted into the high-level feature representation, the complex classification and other learning tasks can be completed by using a simple model. Thus, deep learning can be understood as "feature learning" or "representation learning". Theoretically, the larger the sample size, the more reliable the recognition accuracy. This method also requires a priori labeled training samples, and the process of obtaining labeled samples requires a significant amount of labor and time.
Among the methods above, supervised classification, random forests, and deep learning all need training samples and a long training and learning process, and their extrapolation and generalization abilities are limited; unsupervised classification needs no training samples, but its result only distinguishes different classes without determining their attributes and often fails to meet practical requirements.
Disclosure of Invention
To address these problems, the invention provides an automatic extraction method for vegetation and water body information in optical remote sensing images, which aims to classify random samples of any remote sensing image efficiently and accurately without any training samples, achieving truly fully automatic extraction of vegetation and water body information.
In order to solve the technical problems, the technical scheme of the invention is as follows:
an automatic extraction method for vegetation and water body information of optical remote sensing images comprises the following steps:
acquiring an optical remote sensing data sample, and randomly extracting 10% of samples in the optical remote sensing data sample as a sample subset;
calculating the normalized vegetation index (NDVI) and normalized water index (NDWI) of all samples in the subset; taking the average spectrum of the top 1‰ of samples with the largest NDVI as general characteristic spectrum A (vegetation type); taking the average spectrum of the top 1‰ of samples with the largest NDWI as general characteristic spectrum B (water type); and, among the samples with NDVI below 0.1, taking the set with the smallest NDWI, sized at 1‰ of the whole sample subset, and using its average spectrum as general characteristic spectrum C (other types);
using the obtained general characteristic spectra A, B and C, performing supervised classification based on the minimum spectral angle on all samples in the subset, while recording the minimum spectral angle for the vegetation, water, and other types; for each type, taking the 50% of samples with the smallest spectral angles and applying k-means unsupervised classification based on minimum Euclidean distance, obtaining 10 characteristic spectra per type, 30 in total;
and performing pixel-by-pixel supervised classification based on minimum Euclidean distance on the global image IMG using the 30 characteristic spectra to obtain the vegetation and water body extraction results.
In some embodiments, the normalized vegetation index and the normalized water index are calculated as follows:
NDVI = (Rnir - Rred) / (Rnir + Rred)

NDWI = (Rgreen - Rnir) / (Rgreen + Rnir)

where NDVI is the normalized vegetation index and NDWI the normalized water index; Rnir, Rred and Rgreen are the spectral reflectances of the near-infrared, red and green bands respectively.
In some embodiments, the minimum spectral angle is calculated as follows:
θ = arccos( Σ t_i·r_i / ( √(Σ t_i²) · √(Σ r_i²) ) ),  with sums over i = 1, …, n

where t is a general characteristic spectrum vector, r is the sample vector to be classified, and n is the number of image bands.
In some embodiments, each of the 30 characteristic spectra has n image band values and a label; the labels are set to 1 for vegetation, 2 for water, and 3 for others.
In some embodiments, after the optical remote sensing data sample is obtained, the method further includes radiometric calibration: the optical remote sensing data are calibrated according to the type of sensor used to acquire them, converting the DN values of the sample into radiance, calculated as:
Lλ=Gain*DN
where Gain is the calibration coefficient, DN is the observed value of the satellite-borne sensor, and Lλ is the converted radiance.
In some embodiments, after the radiometric calibration, the method further includes atmospheric correction: the preprocessed optical remote sensing data sample is first unit-converted according to the input requirements of the FLAASH model and its storage format is converted to BIL; the sensor height, pixel size, atmospheric model and aerosol model are set according to the header information of the satellite remote sensing data file; and the atmospheric correction is then executed.
In some embodiments, after the atmospheric correction, the method further includes orthorectification: the geometric distortion of the atmospherically corrected sample is corrected, tilt and projection errors are corrected at the same time, and the image is resampled into an orthorectified image.
In some embodiments, after the orthorectification, a geometric fine-correction step uses the SIFT operator to automatically search for tie points between the image to be corrected and a reference image, automatically screens out gross-error points with large residuals, and finally derives the coordinate transformation between the two images, so that the geometric positioning accuracy of the corrected image is within one pixel.
Beneficial effects: the invention takes the samples with the greatest certainty and uses their average spectra to find the most certain 50% of samples, whose spectral shapes are most similar across different reflectance levels. The 50% of samples with greater certainty and richer, more comprehensive spectral information are then used to infer the remaining, less certain 50%. Compared against visual interpretation, the final classification accuracy exceeds 95%; no prior samples and no manual intervention are needed at any stage, so vegetation and water body information is extracted fully automatically.
In addition, the method is adaptive: it does not depend on a particular satellite or sensor, nor on imaging time, place or conditions, and is therefore general. It is also efficient: taking a domestic Gaofen-1 (GF-1) wide-field image with 16 m resolution and 4 bands as an example, automatic extraction of vegetation and water body information from a 229 km by 238 km image is completed in less than 10 minutes.
The method is suitable for extracting vegetation and water body information rapidly and accurately over large areas, effectively serving applications such as large-area flood disaster monitoring, drought monitoring, crop identification and forest land investigation.
Drawings
FIG. 1 is a schematic view of a GF-1 WFV false-color composite image according to an embodiment of the present invention;
FIG. 2 is a schematic view of an initial sample of vegetation, bodies of water, and other types disclosed in embodiments of the present invention;
FIG. 3 is a schematic diagram of general characteristic spectra of vegetation, bodies of water, and other types disclosed in embodiments of the present invention;
FIG. 4 is a schematic diagram of the results of preliminary classification of vegetation, bodies of water, and other types disclosed in embodiments of the present invention;
FIG. 5 is a schematic diagram of the differentiated characteristic spectra of vegetation, bodies of water, and other types disclosed in embodiments of the invention;
FIG. 6 is a schematic diagram of final classification results for vegetation, bodies of water, and other types disclosed in embodiments of the present invention;
fig. 7 is a schematic diagram of vegetation and water body thematic extraction from a Sentinel-2 image of November 19, 2019 over Kaifeng City, Henan Province, according to an embodiment of the present invention;
fig. 8 is a schematic diagram of vegetation and water body thematic extraction from a GF-1 WFV image of December 27, 2019 over Leling County, Shandong Province, according to an embodiment of the present invention;
fig. 9 is a flowchart of an automatic extraction method of vegetation and water body information in an optical remote sensing image disclosed in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it. It should further be noted that, for convenience of description, the drawings show only the parts relevant to the invention rather than the whole.
As shown in fig. 9, the embodiment provides an automatic extraction method of vegetation and water body information in an optical remote sensing image, which includes the following steps:
step 1, obtaining an optical remote sensing data sample, and randomly extracting 10% of the optical remote sensing data sample as a sample subset;
in the first step, the optical remote sensing data sample is mainly obtained and downloaded through high-resolution satellite optical remote sensing and digital elevation model data. According to the research requirement, the remote sensing data of the satellite No. 1 with the high score in the research area is obtained, the cloud coverage rate is lower than 10%, and the remote sensing data of the satellite No. 1 with the high score can be obtained through application of a China resource satellite application center. The high-score No. 1 satellite can select a 16-meter multispectral camera or an 8-meter panchromatic multispectral camera to observe data. The embodiment takes the data collected by a multispectral camera with 16 meters as an example.
The website of the China Centre for Resources Satellite Data and Application is http://www.cresda.com/CN/; researchers can obtain optical remote sensing data samples through this website.
Step 1.1 further includes preprocessing of the optical remote sensing data sample, consisting mainly of radiometric calibration, atmospheric correction and orthorectification.
After the optical remote sensing data sample is obtained, radiometric calibration is performed: the optical remote sensing data are calibrated according to the type of sensor used to acquire them, converting the DN values of the sample into radiance, calculated as:
Lλ=Gain*DN
where Gain is the calibration coefficient, DN is the observed value of the satellite-borne sensor, and Lλ is the converted radiance.
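As a minimal illustration of this band-wise conversion, the sketch below applies Lλ = Gain * DN with hypothetical gain coefficients; real values come from the sensor's published yearly calibration files, and the units shown are assumptions:

```python
import numpy as np

# Hypothetical per-band gains; real coefficients come from the sensor's
# published yearly calibration files.
gains = np.array([0.2, 0.18, 0.17, 0.19])   # assumed units: W/(m^2*sr*um) per DN
dn = np.array([[120, 340, 560, 780]])       # raw DN values for one pixel, 4 bands
radiance = gains * dn                       # L_lambda = Gain * DN, band by band
# radiance is [[24.0, 61.2, 95.2, 148.2]]
```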
First, unit conversion is performed on the preprocessed optical remote sensing data sample according to the input requirements of the FLAASH model, and its storage format is converted to BIL; the sensor height, pixel size, atmospheric model and aerosol model are set according to the header information of the satellite remote sensing data; and the atmospheric correction is then executed. The purpose of atmospheric correction is to eliminate the influence of atmospheric absorption and scattering on the surface reflectance, remove radiation errors caused by the atmosphere, and invert the reflectance of ground objects. In this embodiment, ENVI/FLAASH is used to perform atmospheric correction on the GF-1 data.
After the atmospheric correction, orthorectification is performed: the geometric distortion of the atmospherically corrected sample is corrected, tilt and projection errors are corrected at the same time, and the image is resampled into an orthorectified image. Orthorectification is an image-processing step that removes image deformation caused by terrain relief; high-resolution satellite data include an RPC file, so the RPC-based orthorectification tool in ENVI software can be used.
After the orthorectification, geometric fine correction is performed: the SIFT operator is used to automatically search for tie points between the image to be corrected and a reference image, gross-error points with large residuals are automatically screened out, and the coordinate transformation between the two images is finally derived, so that the geometric positioning accuracy of the corrected image is within one pixel, meeting the positional accuracy requirement of multi-temporal dynamic monitoring. Geometric fine correction refers to removing geometric distortions in the image to produce a new image that meets given map-projection or graphic requirements.
The preprocessed GF-1 image is shown in fig. 1 (a sub-block of 3000 by 3000 pixels, i.e. an area of 48 km by 48 km, is clipped as the sample data of this embodiment).
For the preprocessed image of N rows and M columns, random samples are acquired by combining uniform and random sampling. Specifically, the whole image is divided into segments of 10 pixels, and one pixel is randomly selected from each segment and added to the random sample set, yielding a sample whose pixel count is about 10% of the whole image.
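The combined uniform and random sampling described above can be sketched as follows; the run length of 10 pixels follows the embodiment, but the 4-band toy image, array layout and function name are illustrative assumptions:

```python
import numpy as np

def sample_subset(img, step=10, seed=0):
    """Draw ~10% of pixels by picking one pixel at random from each run
    of `step` consecutive pixels (uniform + random sampling).

    img: array of shape (rows, cols, bands). Returns (k, bands) samples.
    """
    rng = np.random.default_rng(seed)
    flat = img.reshape(-1, img.shape[-1])          # flatten to (pixels, bands)
    n_full = (flat.shape[0] // step) * step        # ignore the ragged tail
    groups = flat[:n_full].reshape(-1, step, img.shape[-1])
    picks = rng.integers(0, step, size=groups.shape[0])
    return groups[np.arange(groups.shape[0]), picks]

# toy 4-band image, 20 x 30 pixels: 600 pixels -> 60 groups of 10
img = np.random.default_rng(1).random((20, 30, 4))
subset = sample_subset(img)
# subset.shape is (60, 4): one sample per group, about 10% of the pixels
```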
Step 2: calculate the NDVI and NDWI of all samples in the subset; take the average spectrum of the top 1‰ of samples with the largest NDVI as general characteristic spectrum A (vegetation type); take the average spectrum of the top 1‰ of samples with the largest NDWI as general characteristic spectrum B (water type); and, among the samples with NDVI below 0.1, take the set with the smallest NDWI, sized at 1‰ of the whole sample subset, and use its average spectrum as general characteristic spectrum C (other types). Note that this minimum sample set is the top 1‰ of the sample subset, not of the samples with NDVI below 0.1: for example, with 50000 samples in the subset of which 10000 have NDVI below 0.1, 50 samples (1‰ of 50000), not 10, are taken from those 10000.
Chlorophyll in green vegetation reflects near-infrared radiation strongly, whereas water absorbs it strongly and reflects it weakly. In general, the higher the normalized vegetation index NDVI, the more obvious the vegetation characteristics; and the higher the normalized water index NDWI, the more obvious the water characteristics.
The calculation formula of the normalized vegetation index and the normalized water index is as follows:
NDVI = (Rnir - Rred) / (Rnir + Rred)

NDWI = (Rgreen - Rnir) / (Rgreen + Rnir)

where NDVI is the normalized vegetation index and NDWI the normalized water index; Rnir, Rred and Rgreen are the spectral reflectances of the near-infrared, red and green bands respectively.
Sort the NDVI values in descending order, take the top 1‰ of samples, and compute their average spectrum as the general characteristic spectrum of the vegetation type; sort the NDWI values in descending order, take the top 1‰, and compute their average spectrum as that of the water type; among the samples with NDVI < 0.1, compute the average spectrum of the 1‰ (relative to the total size of the sample subset) with the smallest NDWI as that of the other types. The three types of initial samples are shown in fig. 2 and the three general characteristic spectra in fig. 3. The three spectral curves differ markedly in shape and are typically representative; because they come from the image itself, they are adaptive.
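A non-authoritative sketch of step 2, assuming a 4-band image with band order blue, green, red, near-infrared (the band indices and the synthetic demo samples are assumptions, not from the patent):

```python
import numpy as np

def general_spectra(samples, green=1, red=2, nir=3, frac=0.001):
    """Derive the three general characteristic spectra (A: vegetation,
    B: water, C: other) from a (k, bands) sample subset: average spectrum
    of the top 1 per mille by NDVI / NDWI, and of the 1 per mille (of the
    whole subset) with NDVI < 0.1 and the smallest NDWI."""
    eps = 1e-12                                    # guard against zero division
    ndvi = (samples[:, nir] - samples[:, red]) / (samples[:, nir] + samples[:, red] + eps)
    ndwi = (samples[:, green] - samples[:, nir]) / (samples[:, green] + samples[:, nir] + eps)
    k = max(1, int(len(samples) * frac))           # 1 per mille of the whole subset
    A = samples[np.argsort(ndvi)[-k:]].mean(axis=0)            # top-NDVI average
    B = samples[np.argsort(ndwi)[-k:]].mean(axis=0)            # top-NDWI average
    low = np.where(ndvi < 0.1)[0]                  # candidates for "other"
    C = samples[low[np.argsort(ndwi[low])[:k]]].mean(axis=0)   # smallest-NDWI average
    return A, B, C

# synthetic subset: 998 soil-like pixels plus one vegetation-like and one
# water-like pixel; with 1000 samples, 1 per mille is a single sample
rng = np.random.default_rng(0)
samples = np.vstack([rng.uniform(0.2, 0.3, (998, 4)),
                     [[0.10, 0.10, 0.05, 0.60]],   # vegetation-like (high NIR)
                     [[0.10, 0.40, 0.10, 0.02]]])  # water-like (low NIR)
A, B, C = general_spectra(samples)
# A recovers the vegetation-like spectrum, B the water-like one
```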
Step 3: using the obtained general characteristic spectra A, B and C, perform supervised classification based on the minimum spectral angle on all samples in the subset, while recording the minimum spectral angle for the vegetation, water, and other types; for each type, take the 50% of samples with the smallest spectral angles and apply k-means unsupervised classification based on minimum Euclidean distance, obtaining 10 characteristic spectra per type, 30 in total. Each of the 30 characteristic spectra has N image band values (N is the number of bands, e.g. 4 for a GF-1 WFV image and 10 for a Sentinel-2 image) and a label, set to 1 for vegetation, 2 for water, and 3 for others.
The minimum spectral angle is calculated as follows:
θ = arccos( Σ t_i·r_i / ( √(Σ t_i²) · √(Σ r_i²) ) ),  with sums over i = 1, …, n

where t is a general characteristic spectrum vector, r is the sample vector to be classified, and n is the number of image bands.
Supervised classification based on the minimum spectral angle is performed on the random sample set with the three general characteristic spectra above, while the minimum spectral angle is recorded in radians. The smaller the spectral angle, the more similar the shape of the spectral curves; classifying by spectral angle helps discover the many samples that are morphologically similar at different reflectance levels. For each of the vegetation, water, and other types, the spectral angles are sorted in ascending order and the smallest 50% of samples are subjected to k-means unsupervised classification based on minimum Euclidean distance. The preliminary classification result is shown in fig. 4 and the differentiated characteristic spectra in fig. 5.
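The minimum-spectral-angle supervised classification can be sketched as follows (an illustration only; the function names and the two demo spectra are invented here):

```python
import numpy as np

def spectral_angle(t, r):
    """Spectral angle (radians) between reference spectrum t and sample r:
    arccos of the normalised dot product, as in the formula above."""
    cos = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))     # clip guards against rounding

def classify_min_angle(samples, refs):
    """Assign each sample (rows of a (k, bands) array) to the reference
    spectrum with the smallest spectral angle; also return that angle."""
    angles = np.array([[spectral_angle(t, s) for t in refs] for s in samples])
    return angles.argmin(axis=1), angles.min(axis=1)

refs = np.array([[0.10, 0.10, 0.05, 0.60],        # vegetation-like reference
                 [0.10, 0.40, 0.10, 0.02]])       # water-like reference
samples = np.array([[0.20, 0.20, 0.10, 1.20],     # vegetation shape, 2x brighter
                    [0.05, 0.20, 0.05, 0.01]])    # water shape, half as bright
labels, angles = classify_min_angle(samples, refs)
# scaled copies of a spectrum make an angle of 0 with it, so brightness
# differences do not matter: labels come out as [0, 1]
```

Sorting the recorded minimum angles in ascending order within each class and keeping the first half then yields the 50% most certain samples passed on to k-means.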
The K-means unsupervised classification algorithm is as follows:
First, K samples (K = 10 in this embodiment) are randomly selected as initial cluster centres. The distance between each sample and each cluster centre is then computed, and each sample is assigned to the centre closest to it; a centre and the objects assigned to it represent a cluster. The average spectrum of the samples currently in each cluster is computed as the new cluster centre. The loop repeats until either of the following conditions is met:
1) no objects (or at most a minimum number, taken in this invention as 1% of the total sample count) are reassigned to a different cluster;
2) the number of iterations reaches a set limit (100 in this invention).
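A sketch of the k-means loop with the two stopping rules above (illustrative only; the two-cluster synthetic spectra and k = 2 are chosen just to keep the demo small, where the embodiment uses k = 10):

```python
import numpy as np

def kmeans_spectra(samples, k=10, max_iter=100, min_frac=0.01, seed=0):
    """Plain k-means as described: random initial centres, assignment by
    minimum Euclidean distance, centres updated to cluster means; stops
    when at most min_frac of samples change cluster, or after max_iter."""
    rng = np.random.default_rng(seed)
    centres = samples[rng.choice(len(samples), size=k, replace=False)]
    labels = np.full(len(samples), -1)
    for _ in range(max_iter):
        # distance of every sample to every centre, shape (n_samples, k)
        d = np.linalg.norm(samples[:, None, :] - centres[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if (new_labels != labels).sum() <= min_frac * len(samples):
            labels = new_labels
            break                                  # stopping rule 1)
        labels = new_labels
        for j in range(k):                         # recompute cluster means
            if np.any(labels == j):                # keep an empty cluster's centre
                centres[j] = samples[labels == j].mean(axis=0)
    return centres, labels

# two well-separated synthetic spectral clusters of 50 samples each
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.01, (50, 4)),
                  rng.normal(0.8, 0.01, (50, 4))])
centres, labels = kmeans_spectra(data, k=2)
# each blob ends up with its own centre and a single shared label
```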
Step 4: perform pixel-by-pixel supervised classification based on minimum Euclidean distance on the global image IMG using the 30 characteristic spectra obtained above, producing the classification results for the vegetation, water, and other types and thereby realizing adaptive, fully automatic extraction of vegetation and water bodies. The extraction results are shown in fig. 6.
The calculation formula of the euclidean distance is as follows:
d(t, r) = √( Σ (t_i - r_i)² ),  with the sum over i = 1, …, n
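Step 4 can be sketched as a per-pixel nearest-spectrum search (illustrative; the three spectra, the labels and the tiny 2 x 2 image below are made up):

```python
import numpy as np

def classify_pixels(image, spectra, labels):
    """Per-pixel supervised classification by minimum Euclidean distance.
    image: (rows, cols, bands); spectra: (m, bands) characteristic spectra;
    labels: (m,) class label per spectrum (1 vegetation, 2 water, 3 other).
    Returns a (rows, cols) label map."""
    flat = image.reshape(-1, image.shape[-1])
    # distance of every pixel to every characteristic spectrum
    d = np.linalg.norm(flat[:, None, :] - spectra[None, :, :], axis=2)
    return labels[d.argmin(axis=1)].reshape(image.shape[:2])

spectra = np.array([[0.10, 0.10, 0.05, 0.60],     # vegetation
                    [0.10, 0.40, 0.10, 0.02],     # water
                    [0.25, 0.25, 0.25, 0.25]])    # other
class_labels = np.array([1, 2, 3])
image = np.array([[[0.10, 0.10, 0.05, 0.60], [0.10, 0.40, 0.10, 0.02]],
                  [[0.25, 0.25, 0.25, 0.25], [0.11, 0.09, 0.06, 0.58]]])
out = classify_pixels(image, spectra, class_labels)
# out is [[1, 2], [3, 1]]: the last pixel is nearest the vegetation spectrum
```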
example verification
To test the extraction of vegetation and water body information, a Sentinel-2 image of Kaifeng, Henan Province acquired on November 19, 2019 (10 m resolution) and a GF-1 WFV image of Leling County, Shandong Province acquired on December 27, 2019 (16 m resolution) were collected. The proposed method was used to classify these two scenes from different times, places, and satellite sensors, and the results were compared for accuracy against manual visual interpretation. The overall classification accuracy of both scenes exceeds 95%; details are shown in figs. 7 and 8 respectively.
The present invention uses the class average spectra to find the 50% of samples with the greatest certainty, i.e. those whose spectral shapes are most similar to the class spectra at different reflectance levels. These higher-certainty samples, whose spectral information is richer and more comprehensive, are then used to classify the remaining 50% of lower-certainty samples. Compared with visual interpretation, a classification result with accuracy above 95% is finally obtained, at no point requiring prior samples as support and without manual intervention, thus realizing fully automatic extraction of vegetation and water body information.
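The certainty-based split described above — keeping the half of the samples whose spectral angle to their best-matching class spectrum is smallest — can be sketched as follows (a simplified, global rather than per-class illustration; all names are assumptions):

```python
import numpy as np

def most_certain_half(samples, class_spectra):
    """Split samples into the 50% most certain (smallest spectral angle to
    their best-matching class spectrum) and the remaining 50%.

    Returns (certain_indices, uncertain_indices).
    """
    samples = np.asarray(samples, dtype=float)
    class_spectra = np.asarray(class_spectra, dtype=float)
    # cosine of the spectral angle between every sample and every class spectrum
    num = samples @ class_spectra.T
    den = (np.linalg.norm(samples, axis=1)[:, None]
           * np.linalg.norm(class_spectra, axis=1)[None, :])
    angles = np.arccos(np.clip(num / den, -1.0, 1.0))
    best = angles.min(axis=1)  # certainty = smallest angle over all classes
    order = np.argsort(best)
    half = len(samples) // 2
    return order[:half], order[half:]
```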
In addition, the method is adaptive: it does not depend on a particular satellite or sensor, nor on imaging time, place or conditions. It is a general-purpose, efficient and fast method; taking China's domestically produced Gaofen-1 wide-field-of-view imagery (16 m resolution, 4 bands) as an example, automatic extraction of vegetation and water body information for a 229 km × 238 km scene can be completed in under 10 minutes.
The method is suitable for rapid and accurate large-area extraction of vegetation and water body information, effectively serving application fields such as large-area flood disaster monitoring, drought monitoring, crop identification and forest land surveys.
The above embodiments only illustrate the technical concept and features of the present invention; their purpose is to enable those skilled in the art to understand and implement the invention, not to limit its scope of protection. All equivalent changes or modifications made in accordance with the spirit of the present disclosure are intended to fall within its scope of protection.

Claims (8)

1. An automatic extraction method for vegetation and water body information from optical remote sensing images, characterized by comprising the following steps:
acquiring optical remote sensing data samples, and randomly extracting 10% of the samples as a sample subset;
calculating the normalized vegetation index and the normalized water index of all samples in the sample subset; taking the average spectrum of the top 1‰ of samples with the largest normalized vegetation index as the general characteristic spectrum A of the vegetation class, and the average spectrum of the top 1‰ of samples with the largest normalized water index as the general characteristic spectrum B of the water body class; among the samples whose normalized vegetation index is less than 0.1, taking those with the smallest normalized water index to form a minimum sample set whose size is 1‰ of the sample subset, and taking the average spectrum of this set as the general characteristic spectrum C of the other class;
according to the obtained general characteristic spectra A, B and C, performing supervised classification based on the minimum spectral angle on all samples in the sample subset, while recording the minimum spectral angle of each sample with respect to the vegetation, water body and other classes; for each of the three classes, taking the 50% of samples with the smallest spectral angles, and performing k-means unsupervised classification based on the minimum Euclidean distance on these samples, obtaining 10 characteristic spectra per class, 30 characteristic spectra in total;
performing pixel-by-pixel supervised classification based on the minimum Euclidean distance on the global image IMG according to the 30 characteristic spectra, to obtain the extraction results of the vegetation and the water body.
2. The method for automatically extracting vegetation and water body information from optical remote sensing images according to claim 1, wherein the calculation formulas of the normalized vegetation index and the normalized water index are as follows:
$NDVI = \dfrac{R_{nir} - R_{red}}{R_{nir} + R_{red}}$

$NDWI = \dfrac{R_{green} - R_{nir}}{R_{green} + R_{nir}}$
wherein NDVI is the normalized vegetation index and NDWI is the normalized water index; $R_{nir}$, $R_{red}$ and $R_{green}$ are the spectral reflectances of the near-infrared, red and green bands, respectively.
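As an illustrative aid (not part of the claims), the two index formulas can be computed per pixel with NumPy; the function name and inputs are assumptions:

```python
import numpy as np

def ndvi_ndwi(r_nir, r_red, r_green):
    """Compute the normalized vegetation index and normalized water index
    from band reflectances (scalars or arrays of equal shape)."""
    r_nir, r_red, r_green = (np.asarray(b, dtype=float)
                             for b in (r_nir, r_red, r_green))
    ndvi = (r_nir - r_red) / (r_nir + r_red)
    ndwi = (r_green - r_nir) / (r_green + r_nir)
    return ndvi, ndwi
```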
3. The method for automatically extracting vegetation and water body information from optical remote sensing images according to claim 1, wherein the spectral angle is calculated according to the following formula:
$\alpha = \arccos\left(\dfrac{\sum_{i=1}^{n} t_i r_i}{\sqrt{\sum_{i=1}^{n} t_i^2}\,\sqrt{\sum_{i=1}^{n} r_i^2}}\right)$
wherein t is a general characteristic spectrum vector, r is a sample vector to be classified, and n is the number of image bands.
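The spectral angle formula above can be sketched as follows (an illustrative aid; the clipping guard against floating-point rounding is an addition, not part of the claim):

```python
import numpy as np

def spectral_angle(t, r):
    """Spectral angle (radians) between a characteristic spectrum t
    and a sample vector r, both of length n (number of image bands)."""
    t = np.asarray(t, dtype=float)
    r = np.asarray(r, dtype=float)
    cos_a = (t * r).sum() / (np.linalg.norm(t) * np.linalg.norm(r))
    # clip to [-1, 1] so rounding error cannot push arccos out of domain
    return np.arccos(np.clip(cos_a, -1.0, 1.0))
```

Parallel spectra (same shape at a different reflectance level) give an angle of zero, which is why the spectral angle measures shape similarity independently of brightness.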
4. The method for automatically extracting vegetation and water body information from optical remote sensing images according to claim 1, wherein each of the 30 characteristic spectra consists of n image band values and a label, the labels being set as: vegetation 1, water 2, other 3.
5. The method for automatically extracting vegetation and water body information from optical remote sensing images according to claim 1, further comprising radiometric calibration after the optical remote sensing data samples are acquired, wherein radiometric calibration is performed on the optical remote sensing data according to the sensor type of the acquired samples, converting the DN values of the samples into radiance, calculated as follows:
$L_\lambda = Gain \times DN$

where $Gain$ is the calibration coefficient, $DN$ is the observed value of the satellite-borne sensor, and $L_\lambda$ is the converted radiance.
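The calibration of claim 5 is a per-band linear scaling; a minimal sketch with NumPy follows (the function name and array layout are assumptions):

```python
import numpy as np

def dn_to_radiance(dn, gains):
    """Convert DN counts (rows, cols, bands) to radiance, L = Gain * DN,
    with one calibration coefficient per band."""
    dn = np.asarray(dn, dtype=float)
    gains = np.asarray(gains, dtype=float)
    return dn * gains  # gains broadcast over the trailing band axis
```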
6. The method for automatically extracting vegetation and water body information from optical remote sensing images according to claim 5, further comprising atmospheric correction after the radiometric calibration: first performing unit conversion on the preprocessed optical remote sensing data samples according to the input requirements of the FLAASH model, converting the data storage format of the samples to BIL, setting the sensor altitude, pixel size, atmospheric model and aerosol model according to the header file information of the satellite remote sensing data, and finally performing the atmospheric correction.
7. The method for automatically extracting vegetation and water body information from optical remote sensing images according to claim 6, further comprising orthorectification after the atmospheric correction, wherein geometric distortion correction is performed on the atmospherically corrected optical remote sensing data samples, tilt correction and projection difference (relief displacement) correction are applied to the image, and the image is resampled into an orthoimage.
8. The method for automatically extracting vegetation and water body information from optical remote sensing images according to claim 7, further comprising geometric fine rectification after the orthorectification, wherein a SIFT operator is used to automatically search for homonymous tie points between the image to be rectified and the reference image, gross error points with large residuals are automatically screened out, and the coordinate transformation relation between the image to be rectified and the reference image is finally obtained, so that the geometric positioning accuracy of the rectified image is within one pixel.
CN202010850663.8A 2020-08-21 2020-08-21 Automatic extraction method for vegetation and water information of optical remote sensing image Active CN112131946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010850663.8A CN112131946B (en) 2020-08-21 2020-08-21 Automatic extraction method for vegetation and water information of optical remote sensing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010850663.8A CN112131946B (en) 2020-08-21 2020-08-21 Automatic extraction method for vegetation and water information of optical remote sensing image

Publications (2)

Publication Number Publication Date
CN112131946A true CN112131946A (en) 2020-12-25
CN112131946B CN112131946B (en) 2023-06-23

Family

ID=73851003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010850663.8A Active CN112131946B (en) 2020-08-21 2020-08-21 Automatic extraction method for vegetation and water information of optical remote sensing image

Country Status (1)

Country Link
CN (1) CN112131946B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967308A (en) * 2021-02-26 2021-06-15 湖南南方水利水电勘测设计院有限公司 Amphibious SAR image boundary extraction method and system
CN113324923A (en) * 2021-06-07 2021-08-31 郑州大学 Remote sensing water quality inversion method combining time-space fusion and deep learning
CN114201692A (en) * 2022-02-18 2022-03-18 清华大学 Method and device for collecting crop type samples
CN114235716A (en) * 2021-11-11 2022-03-25 内蒙古师范大学 Water body optical classification and quality control method and computer readable storage medium
CN114841231A (en) * 2022-03-21 2022-08-02 赛思倍斯(绍兴)智能科技有限公司 Crop remote sensing classification method and system
CN115035423A (en) * 2022-01-10 2022-09-09 华南农业大学 Hybrid rice male and female parent identification and extraction method based on unmanned aerial vehicle remote sensing image
CN115561199A (en) * 2022-09-26 2023-01-03 重庆数字城市科技有限公司 Water bloom monitoring method based on satellite remote sensing image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101539998A (en) * 2009-04-29 2009-09-23 中国地质科学院矿产资源研究所 Alteration remote sensing abnormity extraction method and system
CN109977801A (en) * 2019-03-08 2019-07-05 中国水利水电科学研究院 A kind of quick Dynamic Extraction method and system of region water body of optical joint and radar
WO2020063461A1 (en) * 2018-09-30 2020-04-02 广州地理研究所 Urban extent extraction method and apparatus based on random forest classification algorithm, and electronic device
AU2020100917A4 (en) * 2020-06-02 2020-07-09 Guizhou Institute Of Pratacultural A Method For Extracting Vegetation Information From Aerial Photographs Of Synergistic Remote Sensing Images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Mingyue; YANG Guijun; SONG Weidong; XU Tao: "Method for extracting agricultural land by combining remote sensing composite indices with different classification techniques", Science of Surveying and Mapping, no. 05 *
WEN Qi; XIA Liegang; LI Lingling; WU Wei: "Research on automatic sample selection methods for disaster emergency land cover classification", Geomatics and Information Science of Wuhan University, no. 07 *

Also Published As

Publication number Publication date
CN112131946B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN112131946B (en) Automatic extraction method for vegetation and water information of optical remote sensing image
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN109613513B (en) Optical remote sensing potential landslide automatic identification method considering InSAR deformation factor
US8913826B2 (en) Advanced cloud cover assessment for panchromatic images
CN111898688B (en) Airborne LiDAR data tree classification method based on three-dimensional deep learning
CN107392130A (en) Classification of Multispectral Images method based on threshold adaptive and convolutional neural networks
CN112183209A (en) Regional crop classification method and system based on multi-dimensional feature fusion
US20050114027A1 (en) Cloud shadow detection: VNIR-SWIR
CN113033670A (en) Method for extracting rice planting area based on Sentinel-2A/B data
CN110390255A (en) High-speed rail environmental change monitoring method based on various dimensions feature extraction
CN115170979B (en) Mining area fine land classification method based on multi-source data fusion
CN112669363B (en) Method for measuring three-dimensional green space of urban green space
CN107527037A (en) Blue-green algae identification and analysis system based on unmanned aerial vehicle remote sensing data
US6990410B2 (en) Cloud cover assessment: VNIR-SWIR
CN113033279A (en) Crop fine classification method and system based on multi-source remote sensing image
Aplin et al. Predicting missing field boundaries to increase per-field classification accuracy
CN108898070A (en) A kind of high-spectrum remote-sensing extraction Mikania micrantha device and method based on unmanned aerial vehicle platform
Aubry-Kientz et al. Multisensor data fusion for improved segmentation of individual tree crowns in dense tropical forests
CN110705449A (en) Land utilization change remote sensing monitoring analysis method
CN117409339A (en) Unmanned aerial vehicle crop state visual identification method for air-ground coordination
CN117575953A (en) Detail enhancement method for high-resolution forestry remote sensing image
Zou et al. The fusion of satellite and unmanned aerial vehicle (UAV) imagery for improving classification performance
CN116994029A (en) Fusion classification method and system for multi-source data
CN111368776A (en) High-resolution remote sensing image classification method based on deep ensemble learning
CN113516059B (en) Solid waste identification method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant