CN112131946B - Automatic extraction method for vegetation and water information of optical remote sensing image - Google Patents

Automatic extraction method for vegetation and water information of optical remote sensing image

Info

Publication number
CN112131946B
CN112131946B
Authority
CN
China
Prior art keywords
vegetation
remote sensing
water
optical remote
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010850663.8A
Other languages
Chinese (zh)
Other versions
CN112131946A (en)
Inventor
欧阳斌 (Ouyang Bin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Yinhan Technology Co ltd
Original Assignee
Changsha Yinhan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Yinhan Technology Co ltd filed Critical Changsha Yinhan Technology Co ltd
Priority to CN202010850663.8A
Publication of CN112131946A
Application granted
Publication of CN112131946B
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/182 Network patterns, e.g. roads or rivers
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30 Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an automatic extraction method for vegetation and water information from optical remote sensing images. The method acquires optical remote sensing data samples and randomly extracts 10% of them as a sample subset; it calculates the normalized vegetation index and normalized water index of all samples in the subset, obtains general characteristic spectra, performs supervised classification based on the minimum spectral angle on all samples in the subset, and records the minimum spectral angles corresponding to the vegetation type, the water body type and the other type. For each type, the 50% of samples with the smallest spectral angles are then clustered by k-means unsupervised classification based on the minimum Euclidean distance, giving 10 characteristic spectra per type, 30 in total; the global image is finally classified pixel by pixel by supervised classification based on the minimum Euclidean distance, yielding the vegetation and water extraction results. No prior samples and no manual intervention are needed at any stage, so vegetation and water body information is extracted fully automatically.

Description

Automatic extraction method for vegetation and water information of optical remote sensing image
Technical Field
The invention relates to the technical field of remote sensing image extraction, in particular to an automatic extraction method for vegetation and water information from optical remote sensing images.
Background
Supervised classification: a technique, grounded in statistical decision theory, that classifies by training on typical samples. Based on samples provided from known training areas, feature parameters are selected and turned into decision rules, and a discriminant function is established to classify the image to be classified. The training areas must be typical and representative. If the criterion meets the classification accuracy requirement, it is accepted; otherwise, the decision rule must be re-established until the accuracy requirement is met. This approach requires prior training samples and therefore cannot achieve fully automatic classification. Moreover, samples obtained from one image cannot be used directly to classify another image, so its extrapolation and generalization capabilities are insufficient;
unsupervised classification: the classification process applies no prior knowledge and relies only on the data themselves (the distribution of spectral features of ground objects in the remote sensing image), i.e., natural clustering, to classify blindly. The result only distinguishes different categories; it cannot determine the attribute of each category. In other words, unsupervised classification can only partition samples into several classes without describing them; the category attributes are determined afterwards by visual interpretation or field investigation. Its shortcoming is that the classification target is not explicit: although classification is automatic, the result often fails to meet practical requirements;
random forest classification: an algorithm that integrates many trees through the idea of ensemble learning; its basic unit is the decision tree, and it essentially belongs to a major branch of machine learning, the ensemble learning method. Each decision tree is a classifier (assuming a classification problem), so for one input sample, N trees produce N classification results. The random forest aggregates all the classification votes and designates the class with the most votes as the final output. Like supervised classification, random forests require a large number of prior samples, so truly fully automatic classification is impossible. Deep learning: a newer research direction in the field of machine learning (ML), introduced to bring machine learning closer to its original goal, artificial intelligence. Through multi-layer processing, initial low-level feature representations are gradually transformed into high-level ones, after which a simple model can complete complex learning tasks such as classification. Deep learning can thus be understood as "feature learning" or "representation learning". In theory, the larger the sample size, the more reliable the recognition accuracy. This approach also requires labeled prior training samples, and obtaining labeled samples costs substantial labor and time.
Among the above methods, the supervised classification, random forest and deep learning methods all require training samples and long training and learning processes, and their extrapolation and generalization capabilities are insufficient; the unsupervised classification method requires no training samples, but its result only distinguishes different categories without determining their attributes, and often cannot meet actual requirements.
Disclosure of Invention
In view of these problems, the invention provides an automatic extraction method for vegetation and water information from optical remote sensing images, which aims to classify random samples of any remote sensing image efficiently and accurately without any training samples and to achieve truly fully automatic extraction of vegetation and water information.
In order to solve the technical problems, the technical scheme of the invention is as follows:
an automatic extraction method of vegetation and water information of an optical remote sensing image comprises the following steps:
acquiring an optical remote sensing data sample, and randomly extracting 10% of samples in the optical remote sensing data sample as a sample subset;
calculating the normalized vegetation index and the normalized water index of all samples in the sample subset; taking the average spectrum of the top 1‰ of samples with the largest normalized vegetation index as general characteristic spectrum A of the vegetation type; taking the average spectrum of the top 1‰ of samples with the largest normalized water index as general characteristic spectrum B of the water body type; taking the samples whose normalized vegetation index is smaller than 0.1 and, from these, the minimum sample set with the smallest normalized water index, the size of the minimum sample set being 1‰ of the sample subset, and taking the average spectrum of the minimum sample set as general characteristic spectrum C of the other type;
according to the obtained general characteristic spectra A, B and C, performing supervised classification based on the minimum spectral angle on all samples in the sample subset, while recording the minimum spectral angle corresponding to the vegetation type, the water body type and the other type; for each of the three types, taking the 50% of its samples with the smallest spectral angles and performing k-means unsupervised classification based on the minimum Euclidean distance on them, obtaining 10 characteristic spectra per type, 30 characteristic spectra in total;
and according to the 30 characteristic spectra, performing pixel-by-pixel supervised classification of the global image IMG based on the minimum Euclidean distance to obtain the extraction results of vegetation and water.
In some embodiments, the normalized vegetation index and the normalized water index are calculated as follows:
NDVI = (Rnir - Rred) / (Rnir + Rred)

NDWI = (Rgreen - Rnir) / (Rgreen + Rnir)

wherein NDVI is the normalized vegetation index, NDWI is the normalized water index, and Rnir, Rred and Rgreen are the spectral reflectances of the near-infrared, red and green bands respectively.
In some embodiments, the minimum spectral angle is calculated as follows:
α = arccos( Σ t_i·r_i / ( √(Σ t_i²) · √(Σ r_i²) ) ), with the sums running over i = 1, ..., n,

wherein t is the general characteristic spectrum vector, r is the sample vector to be classified, and n is the number of image bands.
In some embodiments, each of the 30 characteristic spectra has n image band values and one label, the labels being set as: vegetation 1, water 2, other 3.
In some embodiments, after the optical remote sensing data sample is acquired, the method further comprises radiometric calibration: the optical remote sensing data are radiometrically calibrated according to the type of the sensor that acquired them, converting the DN values of the sample into radiance, calculated as follows:

Lλ = Gain * DN

wherein Gain is the scaling coefficient, DN is the observation value of the spaceborne sensor, and Lλ is the converted radiance.
In some embodiments, atmospheric correction is performed after the radiometric calibration: unit conversion is first performed on the preprocessed optical remote sensing data sample according to the FLAASH model input requirements, the data storage format of the sample is converted to BIL, the sensor height, pixel size, atmospheric model and aerosol model are set according to the satellite remote sensing data header file information, and finally the atmospheric correction is executed.
In some embodiments, the method further comprises orthographic correction: the geometric distortion of the atmospherically corrected optical remote sensing data sample is corrected, tilt correction and projection-difference correction are applied to the image, and the image is resampled into an orthoimage.
In some embodiments, geometric fine correction follows the orthographic correction: a SIFT operator automatically searches for homonymous tie points between the image to be corrected and the reference image, gross-error points with large errors are automatically screened out, and the coordinate transformation between the image to be corrected and the reference image is finally obtained, so that the geometric positioning accuracy of the corrected image is within one pixel.
The beneficial effects are as follows: by taking the most certain samples and using their average spectra, the invention finds the 50% of samples whose spectral curve shapes are most similar at different reflectance levels. The 50% of samples with higher certainty and richer, more comprehensive spectral information are then used to estimate the 50% with lower certainty; compared against visual interpretation, the final classification accuracy exceeds 95%. No prior samples and no manual intervention are needed at any stage, realizing fully automatic extraction of vegetation and water body information.
In addition, the invention is adaptive: it does not depend on a particular satellite or sensor, nor on imaging time, place or conditions, and is thus a general method. It is also efficient and fast: taking a domestic Gaofen-1 (GF-1) wide-field image with 16 m resolution and 4 bands as an example, the automatic extraction of vegetation and water body information for an image of about 229 km × 238 km is completed in less than 10 minutes.
The method is suited to extracting vegetation and water body information rapidly and accurately over large areas, and thus effectively serves application fields such as large-area flood disaster monitoring, drought monitoring, crop identification and woodland investigation.
Drawings
FIG. 1 is a schematic diagram of a Gaofen-1 (GF-1) WFV false-color composite image according to an embodiment of the invention;
FIG. 2 is a schematic illustration of the initial samples of the vegetation, water and other types disclosed in an embodiment of the present invention;
FIG. 3 is a schematic view of the general characteristic spectra of the vegetation, water and other types disclosed in embodiments of the present invention;
FIG. 4 is a schematic view of the preliminary classification results for the vegetation, water and other types according to an embodiment of the present invention;
FIG. 5 is a schematic view of the differentiated characteristic spectra of the vegetation, water and other types disclosed in embodiments of the present invention;
FIG. 6 is a schematic diagram of the final classification results for the vegetation, water and other types according to embodiments of the present invention;
FIG. 7 is a schematic diagram of vegetation and water extraction from a Sentinel-2 image of November 19, 2019 over Kaifeng City, Henan Province, disclosed in an embodiment of the invention;
FIG. 8 is a schematic diagram of vegetation and water extraction from a GF-1 WFV image of December 27, 2019 over Leling County, Shandong Province;
FIG. 9 is a flowchart of the automatic extraction method for vegetation and water information of an optical remote sensing image according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and specific embodiments, in order to make its objects, technical solutions and advantages clearer. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it. It should further be noted that, for convenience of description, the drawings show only the parts related to the present invention rather than the whole.
As shown in fig. 9, this embodiment provides an automatic extraction method for vegetation and water information from optical remote sensing images, which comprises the following steps:
step 1, obtaining optical remote sensing data samples, and randomly extracting 10% of samples in the optical remote sensing data samples to serve as sample subsets;
in the first step, the optical remote sensing data sample is obtained and downloaded mainly through high-resolution satellite optical remote sensing and digital elevation model data. According to the research requirement, acquiring high-resolution No. 1 satellite remote sensing data of a research region, wherein the cloud coverage rate is lower than 10%, and the high-resolution No. 1 satellite remote sensing data can be acquired through the application of a China resource satellite application center. High score satellite number 1 may select a 16 meter multispectral camera or an 8 meter full color multispectral camera to observe data. The embodiment takes the data collected by a 16-meter multispectral camera as an example.
The website of the China Centre for Resources Satellite Data and Application is http://www.cresda.com/CN/; researchers can obtain the optical remote sensing data samples through this website.
Step 1.1: the method also includes preprocessing of the optical remote sensing data samples, mainly comprising radiometric calibration, atmospheric correction and orthographic correction.
After the optical remote sensing data sample is obtained, radiometric calibration is performed: the optical remote sensing data are calibrated according to the type of the sensor that acquired them, converting the DN values of the sample into radiance, calculated as follows:

Lλ = Gain * DN

wherein Gain is the scaling coefficient, DN is the observation value of the spaceborne sensor, and Lλ is the converted radiance.
For the atmospheric correction, unit conversion is first performed on the preprocessed optical remote sensing data sample according to the FLAASH model input requirements and the data storage format of the sample is converted to BIL; the sensor height, pixel size, atmospheric model and aerosol model are then set according to the satellite remote sensing data header file information; finally, the atmospheric correction is executed. The atmospheric correction aims to eliminate the influence of atmospheric absorption and scattering on the surface reflectance, remove the radiometric errors caused by the atmosphere, and invert the reflectance of ground objects. In this embodiment, ENVI/FLAASH is used to perform the atmospheric correction on the GF-1 data.
After the atmospheric correction, orthographic correction is performed: the geometric distortion of the atmospherically corrected optical remote sensing data sample is corrected, tilt correction and projection-difference correction are applied to the image, and the image is resampled into an orthoimage. Orthographic correction is an image processing method that eliminates the image deformation caused by topographic relief; high-resolution satellite data come with RPC files, and the RPC-based orthorectification tool in ENVI software can be used for the correction.
After the orthographic correction, geometric fine correction is performed: a SIFT operator automatically searches for homonymous tie points between the image to be corrected and the reference image, gross-error points with large errors are automatically screened out, and the coordinate transformation between the image to be corrected and the reference image is finally obtained, so that the geometric positioning accuracy of the corrected image is within one pixel, meeting the positioning accuracy requirement of multi-temporal dynamic monitoring. Geometric fine correction means eliminating geometric distortion in an image and generating a new image that satisfies a given map projection or graphic representation requirement.
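A minimal sketch of this tie-point search, assuming OpenCV's SIFT implementation on two 8-bit grayscale image chips; the ratio-test threshold and RANSAC tolerance are illustrative assumptions, with RANSAC standing in for the gross-error screening described above:

    import cv2
    import numpy as np

    def find_transform(img_to_correct, reference):
        """Match homonymous (tie) points with SIFT and estimate a transform."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img_to_correct, None)
        kp2, des2 = sift.detectAndCompute(reference, None)

        # Keep candidate matches that pass Lowe's ratio test (0.7 is common).
        matches = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
                   if m.distance < 0.7 * n.distance]

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC rejects gross-error points and yields the coordinate transform.
        transform, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return transform, inlier_mask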
The preprocessed GF-1 image is shown in fig. 1 (in this embodiment, a sub-image block of 3000 × 3000 pixels, i.e., 48 km × 48 km at 16 m resolution, is cut out as the sample data).
For the preprocessed image of N rows and M columns, random samples are obtained by combining uniform sampling with random sampling. Specifically, the whole scene is divided into segments of 10 pixels, one pixel is randomly drawn from each segment of 10 and added to the random sample set, yielding samples that amount to about 10% of the whole image.
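A sketch of this combined uniform and random sampling, under the assumption that the preprocessed scene is held as a (bands, rows, cols) reflectance array:

    import numpy as np

    def sample_subset(img, block=10, seed=0):
        """Draw one random pixel from every block of 10, i.e. ~10% of the scene."""
        rng = np.random.default_rng(seed)
        bands, rows, cols = img.shape
        pixels = img.reshape(bands, rows * cols).T       # (n_pixels, bands)
        n_blocks = pixels.shape[0] // block
        offsets = rng.integers(0, block, size=n_blocks)  # one draw per block
        idx = np.arange(n_blocks) * block + offsets
        return pixels[idx]                               # (~10% of pixels, bands)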
Step 2: calculate the normalized vegetation index and normalized water index of all samples in the sample subset; take the average spectrum of the top 1‰ of samples with the largest normalized vegetation index as general characteristic spectrum A of the vegetation type; take the average spectrum of the top 1‰ of samples with the largest normalized water index as general characteristic spectrum B of the water body type; take the samples whose normalized vegetation index is smaller than 0.1 and, from them, the minimum sample set with the smallest normalized water index, the size of the minimum sample set being 1‰ of the sample subset, and take its average spectrum as general characteristic spectrum C of the other type. (The minimum sample set is sized at 1‰ of the sample subset, not 1‰ of the samples with normalized vegetation index below 0.1: for example, with 10,000 samples of NDVI < 0.1 and 50,000 samples in the subset, 50 samples must be taken from the 10,000, not 10, because 1‰ of 50,000 is 50.)
Chlorophyll in green vegetation reflects strongly in the near-infrared band, whereas water absorbs strongly and reflects weakly there. In general, the higher the normalized vegetation index NDVI, the more obvious the vegetation characteristics; and the higher the normalized water index NDWI, the more obvious the water characteristics.
The normalized vegetation index and the normalized water index are calculated as follows:
NDVI = (Rnir - Rred) / (Rnir + Rred)

NDWI = (Rgreen - Rnir) / (Rgreen + Rnir)

wherein NDVI is the normalized vegetation index, NDWI is the normalized water index, and Rnir, Rred and Rgreen are the spectral reflectances of the near-infrared, red and green bands respectively.
Sort the NDVI values from large to small, take the top 1‰ of samples with the largest NDVI, and compute their average spectrum as the general characteristic spectrum of the vegetation type; sort the NDWI values from large to small, take the top 1‰ of samples with the largest NDWI, and compute their average spectrum as the general characteristic spectrum of the water body type; among the samples with NDVI < 0.1, take the 1‰ (of the total sample subset) with the smallest NDWI and compute their average spectrum as the general characteristic spectrum of the other type. The three types of initial samples are shown in fig. 2, and the three general characteristic spectra in fig. 3. The three spectral curves are morphologically distinct and typically representative, and they are adaptive because they come from the image itself.
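The selection of the three general characteristic spectra can be sketched as follows, assuming the GF-1 WFV band order (blue, green, red, near-infrared) and the (n_samples, bands) subset from step 1; function and variable names are illustrative:

    import numpy as np

    def general_spectra(samples, green=1, red=2, nir=3):
        eps = 1e-12  # guard against division by zero
        ndvi = (samples[:, nir] - samples[:, red]) / (samples[:, nir] + samples[:, red] + eps)
        ndwi = (samples[:, green] - samples[:, nir]) / (samples[:, green] + samples[:, nir] + eps)

        k = max(1, samples.shape[0] // 1000)                  # 1 per mille of the subset
        spec_a = samples[np.argsort(ndvi)[-k:]].mean(axis=0)  # vegetation
        spec_b = samples[np.argsort(ndwi)[-k:]].mean(axis=0)  # water

        low = ndvi < 0.1                                      # non-vegetation pool
        spec_c = samples[low][np.argsort(ndwi[low])[:k]].mean(axis=0)  # other
        return spec_a, spec_b, spec_c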
Step 3: according to the obtained general characteristic spectra A, B and C, perform supervised classification based on the minimum spectral angle on all samples in the sample subset, while recording the minimum spectral angle corresponding to the vegetation type, the water body type and the other type; for each of the three types, take the 50% of its samples with the smallest spectral angles and perform k-means unsupervised classification based on the minimum Euclidean distance on them, obtaining 10 characteristic spectra per type, 30 in total. Each of the 30 characteristic spectra consists of n image band values (n is the number of bands; for example, a GF-1 WFV image has 4 bands and a Sentinel-2 image 10 bands) and one label, set as: vegetation 1, water 2, other 3.
The minimum spectral angle is calculated as follows:

α = arccos( Σ t_i·r_i / ( √(Σ t_i²) · √(Σ r_i²) ) ), with the sums running over i = 1, ..., n,

wherein t is the general characteristic spectrum vector, r is the sample vector to be classified, and n is the number of image bands.
The three general characteristic spectra are used to perform a supervised classification of the random sample set based on the minimum spectral angle, recording the minimum spectral angle in radians. The smaller the spectral angle, the more similar the shapes of the spectral curves; classification by spectral angle therefore tends to find many samples of similar shape at different reflectance levels. For the vegetation, water and other types, the spectral angles are sorted from small to large and the first 50% of samples are taken for k-means unsupervised classification based on the minimum Euclidean distance. The preliminary classification result is shown in fig. 4 and the differentiated characteristic spectra in fig. 5.
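A sketch of this minimum-spectral-angle classification and the selection of the 50% most certain samples per type; `spectra` stacks the general characteristic spectra A, B and C row-wise:

    import numpy as np

    def spectral_angles(samples, spectra):
        """Angle (radians) between every sample and every reference spectrum."""
        dot = samples @ spectra.T
        norms = np.linalg.norm(samples, axis=1)[:, None] * np.linalg.norm(spectra, axis=1)
        return np.arccos(np.clip(dot / norms, -1.0, 1.0))   # (n_samples, 3)

    def most_certain_halves(samples, spectra):
        angles = spectral_angles(samples, spectra)
        labels = angles.argmin(axis=1)                      # 0=vegetation, 1=water, 2=other
        halves = []
        for c in range(spectra.shape[0]):
            members = np.flatnonzero(labels == c)
            order = np.argsort(angles[members, c])          # smallest angle = most certain
            halves.append(samples[members[order[:max(1, len(members) // 2)]]])
        return halves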
The k-means unsupervised classification algorithm is as follows:
k samples (k = 10 in this embodiment) are randomly selected as the initial cluster centers. The distance between each sample and each cluster center is then computed, and every sample is assigned to its nearest cluster center; a cluster center together with the samples assigned to it represents one cluster. The average spectrum of the samples currently in each cluster is computed as the new cluster center. The iteration loops in this way until either of the following conditions is met:
1) no samples (or only a minimal number, taken in the present invention as 1% of the total) are reassigned to different clusters;
2) the number of iterations reaches a set limit (100 iterations in the present invention).
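A sketch of this k-means loop with the two stopping rules above; a library routine such as scikit-learn's KMeans could be substituted, but the explicit loop shows the reassignment-count criterion:

    import numpy as np

    def kmeans_spectra(samples, k=10, max_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = samples[rng.choice(samples.shape[0], size=k,
                                     replace=False)].astype(np.float64)
        assign = np.full(samples.shape[0], -1)
        for _ in range(max_iter):
            dist = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
            new_assign = dist.argmin(axis=1)
            moved = np.count_nonzero(new_assign != assign)
            assign = new_assign
            for c in range(k):                        # new center = mean spectrum
                members = samples[assign == c]
                if members.size:
                    centers[c] = members.mean(axis=0)
            if moved <= samples.shape[0] // 100:      # <1% of samples reassigned
                break
        return centers                                # k characteristic spectra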
Step 4: according to the 30 characteristic spectra obtained above, perform pixel-by-pixel supervised classification of the global image IMG based on the minimum Euclidean distance to obtain the extraction results for vegetation and water. Classification results for the vegetation, water and other types are obtained, realizing adaptive, fully automatic extraction of vegetation and water. The extraction results are shown in fig. 6.
The Euclidean distance is calculated as follows:

d(t, r) = √( Σ (t_i - r_i)² ), with the sum running over i = 1, ..., n.
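A sketch of step 4, where `spectra30` is the (30, bands) stack of characteristic spectra and `labels30` holds their 1/2/3 tags; processing in chunks is an implementation detail assumed here to bound memory on a full scene:

    import numpy as np

    def classify_global(img, spectra30, labels30, chunk=100_000):
        """Assign every pixel the label of its nearest spectrum (Euclidean)."""
        bands, rows, cols = img.shape
        pixels = img.reshape(bands, -1).T               # (n_pixels, bands)
        out = np.empty(pixels.shape[0], dtype=labels30.dtype)
        for start in range(0, pixels.shape[0], chunk):
            block = pixels[start:start + chunk]
            d2 = ((block[:, None, :] - spectra30[None, :, :]) ** 2).sum(axis=2)
            out[start:start + chunk] = labels30[d2.argmin(axis=1)]
        return out.reshape(rows, cols)                  # 1=vegetation, 2=water, 3=other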
instance verification
In order to test the vegetation and water information extraction effect of the invention, collecting a sentinel second image of 2019, 11 month and 19 days in Kaifeng city of Henan province, wherein the resolution is 10 meters; the resolution of the high-resolution one-grade WFV image of 12 months and 27 days in 2019 of Shandong Leling county is 16 meters. The method provided by the invention classifies images of two scenes at different time and places and different satellite sensors, and compares the images with the accuracy of the results of manual visual interpretation, and the results show that the overall classification accuracy of the two scenes is more than 95%, and the specific situations are shown in figures 7 and 8 respectively.
By taking the most certain samples and using their average spectra, the present invention finds the 50% of samples whose spectral curve shapes are most similar at different reflectance levels. The 50% of samples with higher certainty and richer, more comprehensive spectral information are then used to estimate the 50% with lower certainty; compared against visual interpretation, the final classification accuracy exceeds 95%. No prior samples and no manual intervention are needed at any stage, realizing fully automatic extraction of vegetation and water body information.
In addition, the invention is adaptive: it does not depend on a particular satellite or sensor, nor on imaging time, place or conditions, and is thus a general method. It is also efficient and fast: taking a domestic GF-1 wide-field image with 16 m resolution and 4 bands as an example, the automatic extraction of vegetation and water body information for an image of about 229 km × 238 km is completed in less than 10 minutes.
The method is suited to extracting vegetation and water body information rapidly and accurately over large areas, and thus effectively serves application fields such as large-area flood disaster monitoring, drought monitoring, crop identification and woodland investigation.
The above embodiments only illustrate the technical concept and features of the present invention; they are intended to enable those skilled in the art to understand and implement the invention, not to limit its scope of protection. All equivalent changes or modifications made according to the essence of the present invention shall fall within the scope of protection of the present invention.

Claims (8)

1. An automatic extraction method of vegetation and water information of an optical remote sensing image is characterized by comprising the following steps:
acquiring an optical remote sensing data sample, and randomly extracting 10% of samples in the optical remote sensing data sample as a sample subset;
calculating the normalized vegetation index and the normalized water index of all samples in the sample subset, taking the average spectrum of the top 1‰ of samples with the largest normalized vegetation index as general characteristic spectrum A of the vegetation type, taking the average spectrum of the top 1‰ of samples with the largest normalized water index as general characteristic spectrum B of the water body type, taking the samples whose normalized vegetation index is smaller than 0.1 and, from these, the minimum sample set with the smallest normalized water index, the number of the minimum sample set being 1‰ of the sample subset, and taking the average spectrum of the minimum sample set as general characteristic spectrum C of the other type;

according to the obtained general characteristic spectra A, B and C, performing supervised classification based on the minimum spectral angle on all samples in the sample subset, while recording the minimum spectral angle corresponding to the vegetation type, the water body type and the other type; for each of the three types, taking the 50% of samples with the smallest spectral angles and performing k-means unsupervised classification based on the minimum Euclidean distance on them, obtaining 10 characteristic spectra per type, 30 characteristic spectra in total;

and according to the 30 characteristic spectra, performing pixel-by-pixel supervised classification of the global image IMG based on the minimum Euclidean distance to obtain the extraction results of vegetation and water.
2. The method for automatically extracting vegetation and water information from an optical remote sensing image according to claim 1, wherein the calculation formula of the normalized vegetation index and the normalized water index is as follows:
NDVI = (Rnir - Rred) / (Rnir + Rred)

NDWI = (Rgreen - Rnir) / (Rgreen + Rnir)

wherein NDVI is the normalized vegetation index, NDWI is the normalized water index, and Rnir, Rred and Rgreen are the spectral reflectances of the near-infrared, red and green bands respectively.
3. The method for automatically extracting vegetation and water information from an optical remote sensing image according to claim 1, wherein the minimum spectral angle is calculated as follows:

α = arccos( Σ t_i·r_i / ( √(Σ t_i²) · √(Σ r_i²) ) ), with the sums running over i = 1, ..., n,

wherein t is the general characteristic spectrum vector, r is the sample vector to be classified, and n is the number of image bands.
4. The method for automatically extracting vegetation and water information from an optical remote sensing image according to claim 1, wherein each of the 30 characteristic spectra has n image band values and one label, the labels being set as: vegetation 1, water 2, other 3.
5. The method for automatically extracting vegetation and water information from an optical remote sensing image according to claim 1, further comprising radiometric calibration: the optical remote sensing data are radiometrically calibrated according to the type of the sensor that acquired them, converting their DN values into radiance, calculated as follows:

Lλ = Gain * DN

wherein Gain is the scaling coefficient, DN is the observation value of the spaceborne sensor, and Lλ is the converted radiance.
6. The method for automatically extracting vegetation and water information from an optical remote sensing image according to claim 5, further comprising atmospheric correction after the radiometric calibration: unit conversion is performed on the preprocessed optical remote sensing data sample according to the FLAASH model input requirements, the data storage format of the sample is converted to BIL, the sensor height, pixel size, atmospheric model and aerosol model are set according to the satellite remote sensing data header file information, and the atmospheric correction is then executed.
7. The method of claim 6, further comprising orthographic correction after the atmospheric correction: correcting the geometric distortion of the atmospherically corrected optical remote sensing data sample, applying tilt correction and projection-difference correction to the image, and resampling the image into an orthoimage.
8. The method for automatically extracting vegetation and water information from an optical remote sensing image according to claim 7, further comprising geometric fine correction: a SIFT operator automatically searches for homonymous tie points between the image to be corrected and the reference image, gross-error points with large errors are automatically screened out, and the coordinate transformation between the image to be corrected and the reference image is finally obtained, so that the geometric positioning accuracy of the corrected image is within one pixel.
CN202010850663.8A 2020-08-21 2020-08-21 Automatic extraction method for vegetation and water information of optical remote sensing image Active CN112131946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010850663.8A CN112131946B (en) 2020-08-21 2020-08-21 Automatic extraction method for vegetation and water information of optical remote sensing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010850663.8A CN112131946B (en) 2020-08-21 2020-08-21 Automatic extraction method for vegetation and water information of optical remote sensing image

Publications (2)

Publication Number Publication Date
CN112131946A (en) 2020-12-25
CN112131946B (en) 2023-06-23

Family

ID=73851003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010850663.8A Active CN112131946B (en) 2020-08-21 2020-08-21 Automatic extraction method for vegetation and water information of optical remote sensing image

Country Status (1)

Country Link
CN (1) CN112131946B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967308B (en) * 2021-02-26 2023-09-19 湖南南方水利水电勘测设计院有限公司 Amphibious boundary extraction method and system for dual-polarized SAR image
CN113324923B (en) * 2021-06-07 2023-07-07 郑州大学 Remote sensing water quality inversion method combining space-time fusion and deep learning
CN114235716B (en) * 2021-11-11 2023-09-26 内蒙古师范大学 Water optical classification and quality control method and computer readable storage medium
CN115035423B (en) * 2022-01-10 2024-04-16 华南农业大学 Hybrid rice parent and parent identification extraction method based on unmanned aerial vehicle remote sensing image
CN114201692B (en) * 2022-02-18 2022-05-20 清华大学 Method and device for collecting crop type samples
CN114841231B (en) * 2022-03-21 2024-04-09 赛思倍斯(绍兴)智能科技有限公司 Crop remote sensing classification method and system
CN115561199A (en) * 2022-09-26 2023-01-03 重庆数字城市科技有限公司 Water bloom monitoring method based on satellite remote sensing image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101539998A (en) * 2009-04-29 2009-09-23 中国地质科学院矿产资源研究所 Alteration remote sensing abnormity extraction method and system
CN109977801A (en) * 2019-03-08 2019-07-05 中国水利水电科学研究院 A kind of quick Dynamic Extraction method and system of region water body of optical joint and radar
WO2020063461A1 (en) * 2018-09-30 2020-04-02 广州地理研究所 Urban extent extraction method and apparatus based on random forest classification algorithm, and electronic device
AU2020100917A4 (en) * 2020-06-02 2020-07-09 Guizhou Institute Of Pratacultural A Method For Extracting Vegetation Information From Aerial Photographs Of Synergistic Remote Sensing Images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101539998A (en) * 2009-04-29 2009-09-23 中国地质科学院矿产资源研究所 Alteration remote sensing abnormity extraction method and system
WO2020063461A1 (en) * 2018-09-30 2020-04-02 广州地理研究所 Urban extent extraction method and apparatus based on random forest classification algorithm, and electronic device
CN109977801A (en) * 2019-03-08 2019-07-05 中国水利水电科学研究院 A kind of quick Dynamic Extraction method and system of region water body of optical joint and radar
AU2020100917A4 (en) * 2020-06-02 2020-07-09 Guizhou Institute Of Pratacultural A Method For Extracting Vegetation Information From Aerial Photographs Of Synergistic Remote Sensing Images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Method of extracting agricultural land by combining remote sensing composite indices with different classification techniques; Zhang Mingyue; Yang Guijun; Song Weidong; Xu Tao; Science of Surveying and Mapping (No. 05); full text *
Research on automatic sample selection for land cover classification in disaster emergency response; Wen Qi; Xia Liegang; Li Lingling; Wu Wei; Geomatics and Information Science of Wuhan University (No. 07); full text *

Also Published As

Publication number Publication date
CN112131946A (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN112131946B (en) Automatic extraction method for vegetation and water information of optical remote sensing image
CN109613513B (en) Optical remote sensing potential landslide automatic identification method considering InSAR deformation factor
CN110287869A (en) High-resolution remote sensing image Crop classification method based on deep learning
CN109063754B (en) Remote sensing image multi-feature joint classification method based on OpenStreetMap
CN113591766B (en) Multi-source remote sensing tree species identification method for unmanned aerial vehicle
CN107392130A (en) Classification of Multispectral Images method based on threshold adaptive and convolutional neural networks
CN113029971B (en) Crop canopy nitrogen monitoring method and system
CN108710864B (en) Winter wheat remote sensing extraction method based on multi-dimensional identification and image noise reduction processing
CN113033670A (en) Method for extracting rice planting area based on Sentinel-2A/B data
CN115170979B (en) Mining area fine land classification method based on multi-source data fusion
CN114049562B (en) Method for fusing and correcting land cover data
CN108898070A (en) A kind of high-spectrum remote-sensing extraction Mikania micrantha device and method based on unmanned aerial vehicle platform
CN110705449A (en) Land utilization change remote sensing monitoring analysis method
CN113033279A (en) Crop fine classification method and system based on multi-source remote sensing image
CN112669363B (en) Method for measuring three-dimensional green space of urban green space
Aubry-Kientz et al. Multisensor data fusion for improved segmentation of individual tree crowns in dense tropical forests
CN117409339A (en) Unmanned aerial vehicle crop state visual identification method for air-ground coordination
CN114778483A (en) Method for correcting terrain shadow of remote sensing image near-infrared wave band for monitoring mountainous region
CN117197668A (en) Crop lodging level prediction method and system based on deep learning
Bilodeau et al. Identifying hair fescue in wild blueberry fields using drone images for precise application of granular herbicide
Zou et al. The fusion of satellite and unmanned aerial vehicle (UAV) imagery for improving classification performance
CN111368776A (en) High-resolution remote sensing image classification method based on deep ensemble learning
Fisette et al. Methodology for a Canadian agricultural land cover classification
Hájek Object-oriented classification of remote sensing data for the identification of tree species composition
CN113516059B (en) Solid waste identification method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant