CN109711446A - Terrain classification method and device based on multispectral image and SAR image - Google Patents

Terrain classification method and device based on multispectral image and SAR image

Info

Publication number
CN109711446A
CN109711446A
Authority
CN
China
Prior art keywords
image
sar image
sar
time series
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811555032.2A
Other languages
Chinese (zh)
Other versions
CN109711446B (en)
Inventor
孙鹭怡
刘军
陈劲松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201811555032.2A priority Critical patent/CN109711446B/en
Publication of CN109711446A publication Critical patent/CN109711446A/en
Application granted granted Critical
Publication of CN109711446B publication Critical patent/CN109711446B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The present invention relates to the field of terrain classification, and in particular to a terrain classification method and device based on multispectral images and SAR images. The method and device acquire a multispectral image of a preset area and extract multispectral image features from it; acquire a time-series SAR image of the preset area and extract time-series SAR image features from it; and perform feature-level fusion of the multispectral image features and the time-series SAR image features to obtain a terrain classification result. The method and device exploit the all-day, all-weather operation and short revisit period of synthetic aperture radar (SAR) to obtain long time-series SAR images, increasing the dimensionality of the input features. Feature-level fusion of the multispectral and SAR images makes full use of the spectral information while drawing on the ground-object structure, texture, and electromagnetic scattering characteristics reflected by the time-series SAR images to assist interpretation of ground objects.

Description

Terrain classification method and device based on multispectral image and SAR image
Technical field
The present invention relates to the field of terrain classification, and in particular to a terrain classification method and device based on multispectral images and SAR images.
Background technique
In recent years, with the continuous development of satellites and imaging technology, more and more channels have become available for acquiring optical remote sensing images and spaceborne synthetic aperture radar (SAR) images, which have become important data sources for wide-area Earth observation. Optical imagery, with its rich spectral information, makes target identification and detection relatively easy. However, traditional terrain classification methods use optical imagery alone; because of weather effects it is difficult to form a time-series data set, which greatly limits the dimensionality of the input features. Spaceborne SAR, as an active microwave remote sensing technique, offers all-day, all-weather operation, strong penetration, and rich texture information, but it is affected by geometric distortion and the speckle inherent in SAR imaging, which limits its interpretation capability, so terrain classification using radar images alone does not achieve high accuracy. How to fuse optical and SAR remote sensing images to realize complementary and comprehensive use of multi-source data and improve terrain classification accuracy has therefore become a research hotspot.
Part of the prior art uses optical remote sensing data alone; affected by cloud cover, it cannot obtain long time-series features, which limits classification accuracy. In studies that fuse optical and SAR data, the prior art does not suppress speckle in the SAR data well: because filtering is performed with a fixed local window, the statistical properties of neighbouring pixels are not taken into account, so that while noise is suppressed, strong scatterers are easily blurred together with surrounding low-coherence regions.
Summary of the invention
The embodiments of the present invention provide a terrain classification method and device based on multispectral images and SAR images, so as to at least solve the technical problem of poor accuracy in existing terrain classification.
According to an embodiment of the present invention, a terrain classification method based on multispectral images and SAR images is provided, comprising the following steps:
S101: acquiring a multispectral image of a preset area, and performing multispectral image feature extraction on the multispectral image;
S102: acquiring a time-series SAR image of the preset area, and performing time-series SAR image feature extraction on the time-series SAR image;
S103: performing feature-level fusion of the multispectral image features and the time-series SAR image features to obtain a terrain classification result.
Further, the method further includes:
S104: performing accuracy assessment on the obtained terrain classification result.
Further, step S101 includes:
Selecting multispectral images covering the same study area with cloud cover of less than 10%; pre-processing the multispectral images, and then calculating the vegetation index NDVI and the water body index NDWI from the top-of-atmosphere (TOA) reflectance;
The vegetation index (NDVI) is calculated as:
NDVI = (Band_NIR - Band_Red) / (Band_NIR + Band_Red)   (1)
The water body index (NDWI) is calculated as:
NDWI = (Band_Green - Band_NIR) / (Band_Green + Band_NIR)   (2)
where Band_Green, Band_NIR, and Band_Red are the top-of-atmosphere reflectances of the green, near-infrared, and red bands of the multispectral image, respectively.
Further, step S102 includes:
Selecting time-series SAR images covering the same study area and pre-processing them: adaptive filtering based on statistically homogeneous pixel sets, calculation of the interferometric coherence coefficient, estimation of the VV/VH polarization channel amplitude dispersion, and extraction of grey-level co-occurrence matrix GLCM texture features.
Further, the adaptive filtering based on statistically homogeneous pixel sets and the calculation of the interferometric coherence coefficient include:
According to the statistical distribution of each pixel, pixels sharing the same distribution are selected to form a statistically homogeneous pixel set. In a single-look complex (SLC) SAR image, the superimposed scattered signal Z(P) of a distributed scatterer is a complex Gaussian random variable with variance 2σ². According to the statistical theory of distributed scatterers in SAR images, the amplitude A(P) = |Z(P)| of the single-look SLC image follows a Rayleigh distribution, and its expectation and variance are respectively:
E(A(P)) = σ·sqrt(π/2),  Var(A(P)) = (2 - π/2)·σ²   (3)
A common measure of the SAR speckle noise level is the coefficient of variation, defined as:
CV = sqrt(Var(A(P))) / E(A(P))   (4)
For the Rayleigh-distributed random variable A(P), substituting formula (3) into (4) shows that the coefficient of variation CV_A is a constant:
CV_A = sqrt(4/π - 1) ≈ 0.523   (5)
For a multi-look image with L looks, the above formula becomes:
CV_A ≈ 0.523 / sqrt(L)   (6)
Therefore the relationship between the variance and the expectation can be expressed as:
Var(A(P)) = [CV_A · E(A(P))]² = [CV_A · μ(P)]²   (7)
where μ(P) denotes the expectation E(A(P)) of the random variable A(P);
Suppose there are N acquisitions of SAR images; for any pixel P, the amplitudes form an N-dimensional vector along the time axis:
A(P) = [A_1(P), A_2(P), …, A_N(P)]^T   (8)
The sample estimate of the expectation E(A(P)) can be expressed as the sample mean Ā(P) = (1/N)·Σ_{i=1}^{N} A_i(P). According to the central limit theorem, Ā(P) approaches a Gaussian distribution as the number of samples N increases. Assuming N is sufficiently large so that the Gaussian assumption holds, the distribution of Ā(P) can be expressed as:
Ā(P) ~ N(μ(P), Var(A(P))/N)   (9)
From this, the confidence interval of μ(P) is obtained:
Ā(P) - z_{1-α/2}·sqrt(Var(A(P))/N) ≤ μ(P) ≤ Ā(P) + z_{1-α/2}·sqrt(Var(A(P))/N)   (10)
where z_{1-α/2} is the 1-α/2 quantile of the standard normal distribution. Substituting formulas (6) and (7) into (10) gives:
Ā(P) - z_{1-α/2}·CV_A·μ(P)/sqrt(N) ≤ μ(P) ≤ Ā(P) + z_{1-α/2}·CV_A·μ(P)/sqrt(N)   (11)
The statistically homogeneous pixel set of P is obtained from formula (11);
The traditional Lee filter is modified so that speckle suppression of the backscatter intensity is carried out over the obtained statistically homogeneous pixel sets, yielding N filtered SAR intensity images;
The interferometric coherence coefficient of a pixel is calculated as:
γ = |Σ_{i=1}^{K} S_1(i)·S_2*(i)| / sqrt( Σ_{i=1}^{K} |S_1(i)|² · Σ_{i=1}^{K} |S_2(i)|² )   (12)
where S_1(i) and S_2(i) are the complex values of pixel i in the master and slave images of the interferometric pair, and K is the number of pixels in the statistically homogeneous pixel set containing pixel i. The coherence coefficient of each resample is calculated with formula (12), giving a group of coherence estimates γ_1, γ_2, …, γ_R; using these resampled estimates together with the originally estimated coherence coefficient γ, an unbiased estimate of the coherence coefficient is obtained by:
γ_c = 2·γ - (1/R)·Σ_{r=1}^{R} γ_r   (13)
Further, obtaining the statistically homogeneous pixel set of P from formula (11) includes:
First setting a search window, taking the sample mean Ā of all pixels within the search window as the estimate of μ(P), calculating an initial confidence interval with α = 50%, and provisionally marking the pixels that fall within this initial interval as homogeneous points;
Then iterating: computing the mean Ā_init over the initial homogeneous point set Ω_init as the new estimate of μ(P), calculating a new confidence interval with α = 5%, and re-testing the pixels in the initial homogeneous point set against the updated confidence interval, where pixels falling within the new interval are homogeneous points;
After the new homogeneous point set is obtained, assessing the connectivity of its pixels and retaining only the points directly or indirectly connected to the reference point P, thereby obtaining the statistically homogeneous pixel set of P.
Further, calculating the coherence coefficient of each sample using formula (12) includes:
First, using the topographic phase simulated from a reference digital elevation model to remove the corresponding bias from the interferometric phase;
Second, calculating the coherence coefficient over the homogeneous set containing each pixel, so as to remove the estimation bias introduced by image quality;
Finally, for all samples in a homogeneous pixel set, resampling R times with replacement (bootstrap) to obtain a group of coherence estimates γ_1, γ_2, …, γ_R.
Further, the VV/VH polarization channel amplitude dispersion estimation includes:
Using the filtered time-series amplitudes, calculating the amplitude dispersion of the VV and VH polarization channels separately; for any pixel:
amp_dispersion = std(amp) / mean(amp)
where amp is the amplitude converted from the backscatter intensity, and the denominator mean(amp) and numerator std(amp) are the mean and standard deviation of the amplitude calculated along the time series, respectively;
The grey-level co-occurrence matrix GLCM texture feature extraction includes:
Using the original single-look SAR images, calculating the GLCM variance separately for the two polarization channels VV and VH, for each image of the time series;
Step S103 includes:
Using a random forest algorithm, inputting the multispectral image features and the time-series SAR image features into a classifier and fusing them;
Step S104 includes:
Calculating the confusion matrix of the classification result from the collected sample points, and from it calculating the user's accuracy and producer's accuracy of each class, the overall classification accuracy, and the Kappa coefficient.
According to another embodiment of the present invention, a terrain classification device based on multispectral images and SAR images is provided, comprising:
a multispectral image feature extraction unit, configured to acquire a multispectral image of a preset area and perform multispectral image feature extraction on the multispectral image;
a time-series SAR image feature extraction unit, configured to acquire a time-series SAR image of the preset area and perform time-series SAR image feature extraction on the time-series SAR image;
a feature-level fusion unit, configured to perform feature-level fusion of the multispectral image features and the time-series SAR image features to obtain a terrain classification result.
Further, the device also includes:
an accuracy assessment unit, configured to perform accuracy assessment on the obtained terrain classification result.
The terrain classification method and device based on multispectral images and SAR images in the embodiments of the present invention exploit the all-day, all-weather operation and short revisit period of synthetic aperture radar (SAR) to obtain long time-series SAR images, increasing the dimensionality of the input features. Feature-level fusion of the multispectral and SAR images makes full use of the spectral information while drawing on the ground-object structure, texture, and electromagnetic scattering characteristics reflected by the time-series SAR images to assist interpretation of ground objects.
Brief description of the drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of this application; the illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flowchart of the terrain classification method based on multispectral images and SAR images according to the present invention;
Fig. 2 is a preferred flowchart of the terrain classification method based on multispectral images and SAR images according to the present invention;
Fig. 3 is a block diagram of the terrain classification device based on multispectral images and SAR images according to the present invention;
Fig. 4 is a preferred block diagram of the terrain classification device based on multispectral images and SAR images according to the present invention;
Fig. 5 shows terrain classification results obtained with the maximum likelihood method in the terrain classification method based on multispectral images and SAR images according to the present invention;
Fig. 6 shows terrain classification results obtained with the support vector machine algorithm in the terrain classification method based on multispectral images and SAR images according to the present invention;
Fig. 7 shows terrain classification results obtained with the random forest algorithm in the terrain classification method based on multispectral images and SAR images according to the present invention.
Specific embodiment
In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
With the widespread use of machine learning algorithms, the accuracy of land cover classification based on remote sensing imagery has improved significantly, and the validity and diversity of the input features have become one of the main factors affecting machine learning classification accuracy. Commonly used classification algorithms include nearest neighbour, Bayesian methods, decision trees, support vector machines (SVM), neural networks, and random forest (RF). As a machine learning algorithm, random forest offers fast speed, high classification accuracy, and high robustness in land cover classification based on remote sensing imagery. Studies have shown that the random forest algorithm can handle high-dimensional data with no limit on the input feature dimensionality, which facilitates feature-level fusion of multi-source data.
The random forest algorithm is essentially a combination of bagging and the decision tree algorithm. First, n training samples are randomly drawn from the original sample set with the bootstrap method; k rounds of sampling are performed, giving k training sets. Second, a decision tree model is trained on each of the k training sets: for a single decision tree, assuming the number of training sample features is n, the best feature for each split is selected according to the information gain, the information gain ratio, or the Gini index; each tree is split in this way until all training samples at a node belong to the same class. The generated decision trees form the random forest. Finally, the classification result is produced by voting. The introduction of randomness makes the random forest resistant to over-fitting and gives it good noise immunity, so the algorithm shows high stability.
Using multi-sensor sources and time-series remote sensing images is the main means of increasing the input feature dimensionality. However, optical remote sensing imagery is affected by cloud cover and it is difficult to obtain long time-series images, whereas SAR imagery, with its all-day, all-weather capability, can provide long time-series images to make up for this deficiency. Because of the differences in sensors and imaging mechanisms, spectral and SAR imagery are complementary in interpreting ground objects. The present invention performs feature-level fusion of multispectral and SAR images and further improves classification accuracy based on machine learning algorithms.
Embodiment 1
According to an embodiment of the present invention, a terrain classification method based on multispectral images and SAR images is provided. Referring to Fig. 1, the method comprises the following steps:
S101: acquiring a multispectral image of a preset area, and performing multispectral image feature extraction on the multispectral image;
S102: acquiring a time-series SAR image of the preset area, and performing time-series SAR image feature extraction on the time-series SAR image;
S103: performing feature-level fusion of the multispectral image features and the time-series SAR image features to obtain a terrain classification result.
The terrain classification method based on multispectral images and SAR images in this embodiment of the present invention exploits the all-day, all-weather operation and short revisit period of synthetic aperture radar (SAR) to obtain long time-series SAR images, increasing the dimensionality of the input features. Feature-level fusion of the multispectral and SAR images makes full use of the spectral information while drawing on the ground-object structure, texture, and electromagnetic scattering characteristics reflected by the time-series SAR images to assist interpretation of ground objects.
As a preferred technical solution, referring to Fig. 2, the method further includes:
S104: performing accuracy assessment on the obtained terrain classification result.
The present invention is described in detail below with a specific embodiment.
The method of the invention exploits the all-day, all-weather operation and short revisit period of synthetic aperture radar (SAR) to obtain long time-series SAR images, increasing the dimensionality of the input features. Through a random forest algorithm, feature-level fusion of the multispectral and SAR images is performed, making full use of the spectral information while drawing on the ground-object structure, texture, and electromagnetic scattering characteristics reflected by the time-series SAR images to assist interpretation. In addition, an adaptive filtering method based on statistically homogeneous pixel sets is used to suppress speckle noise effectively, and new features such as the amplitude dispersion estimate and the statistically homogeneous pixel segmentation are added; combined with the dual polarization channels of SAR, this significantly increases the input feature dimensionality and further improves terrain classification accuracy. The method is divided into: multispectral image feature extraction; SAR image feature extraction; feature-level fusion and classification; and accuracy assessment.
1. Multispectral image feature extraction
First, multispectral images covering the same study area with cloud cover of less than 10% are selected for subsequent analysis. The multispectral images are pre-processed, including resampling, ortho-rectification, radiometric correction, and cloud removal. The vegetation index (NDVI) and the water body index (NDWI) are then calculated from the top-of-atmosphere (TOA) reflectance;
The vegetation index (NDVI) is calculated as:
NDVI = (Band_NIR - Band_Red) / (Band_NIR + Band_Red)   (1)
The water body index (NDWI) is calculated as:
NDWI = (Band_Green - Band_NIR) / (Band_Green + Band_NIR)   (2)
where Band_Green, Band_NIR, and Band_Red are the top-of-atmosphere reflectances of the green, near-infrared, and red bands of the multispectral image, respectively.
The multispectral image features used in this step include: all spectral bands, the vegetation index, and the water body index of the multi-temporal optical images.
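As an illustration of this step, the sketch below computes NDVI and NDWI from TOA reflectance bands with NumPy. The array names band_green, band_nir, and band_red and the eps guard are illustrative assumptions, not part of the patent.

```python
import numpy as np

def compute_indices(band_green, band_nir, band_red, eps=1e-10):
    """Compute NDVI and NDWI (formulas (1) and (2)) from top-of-atmosphere
    reflectance bands; each input is a 2-D float array, and eps avoids
    division by zero over dark pixels."""
    ndvi = (band_nir - band_red) / (band_nir + band_red + eps)
    ndwi = (band_green - band_nir) / (band_green + band_nir + eps)
    return ndvi, ndwi
```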
2. SAR image feature extraction
Time-series SAR images covering the same study area are selected, with multi-polarization images preferred. The SAR data are pre-processed, comprising:
2.1 Sub-pixel registration;
2.2 Interferogram generation;
2.3 Topographic phase removal;
2.4 Adaptive filtering based on statistically homogeneous pixel sets, comprising the following steps:
2.4.1 Determination of the statistically homogeneous pixel set based on the complex covariance matrix
First, according to the statistical distribution of each pixel, pixels sharing the same distribution are selected to form a statistically homogeneous pixel set. The backscatter intensity of a pixel, i.e. a resolution cell, in a SAR image is the superposition of the backscattered intensities of the many scatterers belonging to that resolution cell. For distributed scatterers, which are the most common in natural scenes, the superimposed scattered signal Z(P) in a single-look complex (SLC) SAR image is a complex Gaussian random variable with variance 2σ². According to the statistical theory of distributed scatterers in SAR images, the amplitude A(P) = |Z(P)| of the single-look SLC image follows a Rayleigh distribution, and its expectation and variance are respectively:
E(A(P)) = σ·sqrt(π/2),  Var(A(P)) = (2 - π/2)·σ²   (3)
A common measure of the SAR speckle noise level is the coefficient of variation, defined as:
CV = sqrt(Var(A(P))) / E(A(P))   (4)
For the Rayleigh-distributed random variable A(P), substituting formula (3) into (4) shows that the coefficient of variation CV_A is a constant:
CV_A = sqrt(4/π - 1) ≈ 0.523   (5)
For a multi-look image with L looks, the above formula becomes:
CV_A ≈ 0.523 / sqrt(L)   (6)
Therefore the relationship between the variance and the expectation can be expressed as:
Var(A(P)) = [CV_A · E(A(P))]² = [CV_A · μ(P)]²   (7)
where μ(P) denotes the expectation E(A(P)) of the random variable A(P);
Suppose there are N acquisitions of SAR images; for any pixel P, the amplitudes form an N-dimensional vector along the time axis:
A(P) = [A_1(P), A_2(P), …, A_N(P)]^T   (8)
The sample estimate of the expectation E(A(P)) can be expressed as the sample mean Ā(P) = (1/N)·Σ_{i=1}^{N} A_i(P). According to the central limit theorem, Ā(P) approaches a Gaussian distribution as the number of samples (time-series images) N increases. Assuming N is sufficiently large so that the Gaussian assumption holds, the distribution of Ā(P) can be expressed as:
Ā(P) ~ N(μ(P), Var(A(P))/N)   (9)
From this, the confidence interval of μ(P) is obtained:
Ā(P) - z_{1-α/2}·sqrt(Var(A(P))/N) ≤ μ(P) ≤ Ā(P) + z_{1-α/2}·sqrt(Var(A(P))/N)   (10)
where z_{1-α/2} is the 1-α/2 quantile of the standard normal distribution. Substituting formulas (6) and (7) into (10) gives:
Ā(P) - z_{1-α/2}·CV_A·μ(P)/sqrt(N) ≤ μ(P) ≤ Ā(P) + z_{1-α/2}·CV_A·μ(P)/sqrt(N)   (11)
The test for a shared distribution is thereby converted into a confidence interval estimate: each pixel is tested for whether it follows the same distribution as the reference pixel, so that the image is divided into a number of homogeneous pixel sets. Since the population expectation μ(P) is unknown, in practice a relatively large search window is first set (for example 32 × 32 pixels centred on the reference pixel P), the sample mean Ā of all pixels within the search window is taken as the estimate of μ(P), an initial confidence interval is calculated with α = 50%, and the pixels falling within this initial interval are provisionally marked as homogeneous points. An iteration is then performed: the mean Ā_init over the initial homogeneous point set Ω_init is computed as the new estimate of μ(P), a new confidence interval is calculated with α = 5%, and the pixels in the initial homogeneous point set are re-tested against the updated interval; pixels falling within the new interval are homogeneous points. After the new homogeneous point set is obtained, the connectivity of its pixels is assessed, and only points directly or indirectly connected to the reference point P are retained, giving the statistically homogeneous pixel set of P. A sketch of this selection step is given below.
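The following is a minimal sketch of this selection step, assuming an amplitude stack amp of shape (N, H, W) and applying the test of formula (11) to the temporal mean of each pixel in the window; the window size, the α values, and the connectivity check via scipy.ndimage.label are illustrative choices rather than the patent's exact implementation.

```python
import numpy as np
from scipy import ndimage, stats

def shp_mask(amp, row, col, looks=1, half_win=16):
    """Return a boolean mask (window-sized) of the statistically homogeneous
    pixel set around reference pixel (row, col); amp has shape (N, H, W)."""
    n_img = amp.shape[0]
    cv_a = np.sqrt(4.0 / np.pi - 1.0) / np.sqrt(looks)        # formulas (5)/(6)
    win = amp[:, row - half_win:row + half_win + 1,
                 col - half_win:col + half_win + 1]
    mean_t = win.mean(axis=0)                                  # temporal mean per pixel

    def interval(mu_hat, alpha):
        z = stats.norm.ppf(1.0 - alpha / 2.0)
        half = z * cv_a * mu_hat / np.sqrt(n_img)              # half-width per formula (11)
        return mu_hat - half, mu_hat + half

    # initial pass with a loose interval (alpha = 50%)
    lo, hi = interval(mean_t.mean(), alpha=0.5)
    mask = (mean_t >= lo) & (mean_t <= hi)
    # refined pass with alpha = 5% using the mean over the initial set
    lo, hi = interval(mean_t[mask].mean(), alpha=0.05)
    mask = mask & (mean_t >= lo) & (mean_t <= hi)
    # keep only pixels connected to the reference pixel
    labels, _ = ndimage.label(mask)
    return labels == labels[half_win, half_win]
```

The same mask can then be reused by the intensity filtering and coherence estimation described in sections 2.4.2 and 2.4.3.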
2.4.2 Backscatter intensity filtering based on the statistically homogeneous pixel set
The traditional Lee filter is modified so that speckle suppression of the backscatter intensity is carried out over the statistically homogeneous pixel sets obtained in the previous step, yielding N filtered SAR intensity images.
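A simplified sketch of this filtering step is given below: it replaces the rectangular window of the classical Lee filter with the statistically homogeneous pixel mask and takes a plain mean over that mask; a full modified Lee filter would additionally weight the centre pixel by the local noise statistics, which is omitted here. The callable shp_mask_of and the array names are assumptions for illustration.

```python
import numpy as np

def despeckle_with_shp(intensity, shp_mask_of):
    """Despeckle one backscatter intensity image by averaging over each
    pixel's statistically homogeneous pixel set.

    intensity   : 2-D array of backscatter intensity.
    shp_mask_of : callable (row, col) -> boolean mask with the same shape
                  as `intensity`, marking the pixel's SHP set.
    """
    out = np.empty_like(intensity, dtype=float)
    rows, cols = intensity.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = intensity[shp_mask_of(r, c)].mean()
    return out
```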
2.4.3 Interferometric coherence estimation based on the statistically homogeneous pixel set
The interferometric coherence coefficient of a pixel is calculated as:
γ = |Σ_{i=1}^{K} S_1(i)·S_2*(i)| / sqrt( Σ_{i=1}^{K} |S_1(i)|² · Σ_{i=1}^{K} |S_2(i)|² )   (12)
where S_1(i) and S_2(i) are the complex values of pixel i in the master and slave images of the interferometric pair, and K is the number of pixels in the statistically homogeneous pixel set containing pixel i. To obtain a more accurate coherence estimate, first, the topographic phase simulated from a reference digital elevation model is used to remove the corresponding bias from the interferometric phase; second, the coherence coefficient is calculated over the homogeneous set containing each pixel, so as to remove the estimation bias introduced by image quality; finally, all samples in a homogeneous pixel set are resampled R times with replacement (bootstrap), and the coherence coefficient of each resample is calculated with formula (12), giving a group of coherence estimates γ_1, γ_2, …, γ_R. Using these resampled estimates together with the originally estimated coherence coefficient γ, an unbiased estimate of the coherence coefficient is obtained by:
γ_c = 2·γ - (1/R)·Σ_{r=1}^{R} γ_r   (13)
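A minimal sketch of this estimator for a single pixel is shown below, assuming s1 and s2 are 1-D complex arrays holding the K master and slave SLC samples of the pixel's statistically homogeneous set, with the topographic phase already removed; the number of resamples is a free parameter of the sketch.

```python
import numpy as np

def coherence(s1, s2):
    """Interferometric coherence over a statistically homogeneous pixel set
    (formula (12)); s1, s2 are complex arrays of the K master/slave samples."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

def coherence_bias_corrected(s1, s2, n_resample=100, rng=None):
    """Bootstrap bias-corrected coherence estimate (formulas (12)-(13))."""
    rng = np.random.default_rng(rng)
    gamma = coherence(s1, s2)
    k = len(s1)
    boot = np.empty(n_resample)
    for r in range(n_resample):
        idx = rng.integers(0, k, size=k)        # sampling with replacement
        boot[r] = coherence(s1[idx], s2[idx])
    return 2.0 * gamma - boot.mean()            # 2*gamma minus mean of bootstrap estimates
```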
2.5 VV/VH polarization channel amplitude dispersion estimation
Using the filtered time-series amplitudes, the amplitude dispersion of the VV and VH polarization channels is calculated separately; for any pixel:
amp_dispersion = std(amp) / mean(amp)
where amp is the amplitude converted from the backscatter intensity, and the denominator mean(amp) and numerator std(amp) are the mean and standard deviation of the amplitude calculated along the time series, respectively.
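A one-function sketch of this computation for a whole scene, assuming amp is the filtered amplitude stack of one polarization channel (VV or VH) with shape (N, H, W):

```python
import numpy as np

def amplitude_dispersion(amp, eps=1e-10):
    """Amplitude dispersion per pixel: standard deviation over time divided
    by the mean over time, for one polarization channel."""
    return amp.std(axis=0) / (amp.mean(axis=0) + eps)
```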
2.6 Grey-level co-occurrence matrix (GLCM) texture feature extraction
Using the original single-look SAR images, the GLCM variance is calculated separately for the two polarization channels VV and VH, image by image over the time series.
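A sketch of the GLCM variance of a single image patch using scikit-image (graycomatrix) is given below; the quantization to 32 grey levels, the unit pixel offset, and the 0° direction are illustrative parameters, and in practice the statistic would be evaluated in a sliding window per pixel of each image in the time series.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_variance(patch, levels=32):
    """GLCM variance of one amplitude patch (e.g. a sliding window of a
    single-look SAR image), quantized to `levels` grey levels."""
    # quantize amplitudes to integer grey levels 0..levels-1
    edges = np.linspace(patch.min(), patch.max(), levels + 1)[1:-1]
    q = np.digitize(patch, edges).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                           # normalized co-occurrence matrix
    i = np.arange(levels)
    mean = np.sum(i[:, None] * p)                  # GLCM mean
    return np.sum(((i[:, None] - mean) ** 2) * p)  # GLCM variance
```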
2.7 Geocoding
The SAR image feature layers extracted by this method include: time-series VV/VH polarization backscatter intensity, time-series VV/VH polarization interferometric coherence coefficient, time-series GLCM variance, VV/VH polarization channel amplitude dispersion, and the VV/VH polarization channel statistically homogeneous pixel segmentation result.
3. Feature-level fusion and classification
Using the random forest algorithm, the optical image features and SAR image features extracted in the previous steps are input into the classifier and fused.
Training samples are obtained by visual interpretation of contemporaneous high-resolution optical imagery in Google Earth, and the random forest classification model is parameterized. The parameterized model is then applied to the whole scene to obtain the terrain classification result.
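A minimal sketch of this fusion and classification step with scikit-learn is shown below, assuming the optical and SAR feature layers have already been stacked into per-pixel feature vectors; the variable names and the number of trees are illustrative, not values prescribed by the patent.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_fused_features(optical_feats, sar_feats, train_idx, train_labels,
                            n_trees=200):
    """Feature-level fusion and random forest classification.

    optical_feats, sar_feats : arrays of shape (n_pixels, n_optical) and
                               (n_pixels, n_sar); rows are pixels.
    train_idx, train_labels  : indices and class labels of the training pixels.
    """
    fused = np.hstack([optical_feats, sar_feats])      # feature-level fusion
    rf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
    rf.fit(fused[train_idx], train_labels)
    return rf.predict(fused)                            # classify the whole scene
```

Because the random forest places no limit on the input feature dimensionality, additional SAR feature layers can simply be appended to the fused matrix without changing the classifier.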
4. Accuracy assessment
Validation samples are obtained by visual interpretation of contemporaneous high-resolution optical imagery in Google Earth. The confusion matrix of the classification result is calculated from the collected sample points, and from it the user's accuracy (User's Accuracy) and producer's accuracy (Producer's Accuracy) of each class, the overall classification accuracy (Overall Accuracy), and the Kappa coefficient are calculated.
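As an illustration, the sketch below derives these accuracy measures from validation samples with scikit-learn; the convention that rows of the confusion matrix are the reference classes and columns the predicted classes is an assumption of this sketch.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def accuracy_assessment(y_true, y_pred):
    """Confusion-matrix based accuracy assessment.

    y_true : reference class labels of the validation samples.
    y_pred : classified labels at the same sample points.
    Returns per-class user's/producer's accuracy, overall accuracy and Kappa.
    """
    cm = confusion_matrix(y_true, y_pred).astype(float)   # rows: reference, cols: predicted
    users_acc = np.diag(cm) / cm.sum(axis=0)              # correct / classified as class
    producers_acc = np.diag(cm) / cm.sum(axis=1)          # correct / reference totals
    overall_acc = np.diag(cm).sum() / cm.sum()
    kappa = cohen_kappa_score(y_true, y_pred)
    return users_acc, producers_acc, overall_acc, kappa
```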
Embodiment 2
According to another embodiment of the present invention, a terrain classification device based on multispectral images and SAR images is provided. Referring to Fig. 3, the device comprises:
a multispectral image feature extraction unit 201, configured to acquire a multispectral image of a preset area and perform multispectral image feature extraction on the multispectral image;
a time-series SAR image feature extraction unit 202, configured to acquire a time-series SAR image of the preset area and perform time-series SAR image feature extraction on the time-series SAR image;
a feature-level fusion unit 203, configured to perform feature-level fusion of the multispectral image features and the time-series SAR image features to obtain a terrain classification result.
The terrain classification device based on multispectral images and SAR images in this embodiment of the present invention exploits the all-day, all-weather operation and short revisit period of synthetic aperture radar (SAR) to obtain long time-series SAR images, increasing the dimensionality of the input features. Feature-level fusion of the multispectral and SAR images makes full use of the spectral information while drawing on the ground-object structure, texture, and electromagnetic scattering characteristics reflected by the time-series SAR images to assist interpretation of ground objects.
As a preferred technical solution, referring to Fig. 4, the device further includes:
an accuracy assessment unit 204, configured to perform accuracy assessment on the obtained terrain classification result.
The innovative technical points and beneficial effects of the invention include at least the following:
1. Dual-polarization or multi-polarization SAR data and time-series SAR data are used, and feature extraction is performed on the SAR image of each polarization channel of each acquisition;
2. Speckle in the backscatter intensity is suppressed with the adaptive filtering method based on statistically homogeneous pixel sets; the denoised backscatter intensity time series is used as a feature layer, and the interferometric coherence coefficient re-estimated on the basis of the statistically homogeneous pixel sets is used as another feature layer;
3. The statistically homogeneous pixel segmentation result of each polarization channel is included as a feature layer;
4. The SAR amplitude dispersion of each polarization channel is used as a feature layer.
Compared with conventional methods, the advantage of the present invention is that, by using machine learning algorithms to perform feature-level fusion of optical and SAR images, the all-day, all-weather capability of spaceborne synthetic aperture radar (SAR) can be fully exploited to obtain long time-series images covering the study area, overcoming the problem that optical imagery is limited by cloud cover and valid data are scarce; the dual-polarization and multi-polarization characteristics of SAR imagery further increase the input feature dimensionality; the adaptive filtering method based on statistically homogeneous pixel sets effectively denoises the SAR backscatter features and the interferometric coherence coefficient, improving classification performance; and the image segmentation result based on statistically homogeneous pixel sets and the SAR backscatter amplitude dispersion are added as two new feature layers, further improving classification accuracy.
Referring to Figs. 5-7, terrain classification was carried out using Sentinel-1 time-series SAR images and Sentinel-2 multispectral images covering the same study area to test the validity of the method of the present invention. The classification accuracy and computational efficiency of various machine learning algorithms were compared using SAR features only, optical features only, and combined optical and SAR features, where:
Fig. 5 shows the terrain classification results and accuracy assessment of the maximum likelihood method: (a) classification result using SAR features only; (b) classification result using optical features only; (c) classification result of feature-level fusion of optical and SAR images;
Fig. 6 shows the terrain classification results and accuracy assessment of the support vector machine algorithm: (a) classification result using SAR features only; (b) classification result using optical features only; (c) classification result of feature-level fusion of optical and SAR images.
Fig. 7 shows the terrain classification results of the random forest algorithm: (a) classification result using SAR features only; (b) classification result using optical features only; (c) classification result of feature-level fusion of optical and SAR images.
The overall classification accuracies of the three algorithms are compared as follows:
Table 1. Classification accuracy assessment of the different algorithms
The time taken by the three algorithms to complete one classification is compared as follows:
Table 2. Time consumption comparison of the different classification algorithms
Table 2 shows that, for the same input, RF takes the least time to complete one classification, ML is second, and SVM takes the longest. Regardless of which feature input is used, the classification accuracy of the maximum likelihood (ML) method is the lowest. When only SAR features are used, the random forest (RF) algorithm achieves the highest classification accuracy, 6.5% higher than the SVM algorithm. When only optical features are used, the classification accuracies of SVM and RF are similar, around 93%. After feature-level fusion of optical and SAR features, the classification accuracies of both SVM and RF are higher than when only optical or only SAR features are used. After feature fusion, SVM and RF both reach an overall classification accuracy above 95%, but under the same feature input SVM takes tens of times longer than RF to complete one classification. The above analysis shows that the proposed method, based on feature-level fusion of optical and SAR features and random forest classification, has a clear advantage.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content can be implemented in other ways. The device embodiments described above are only illustrative; for example, the division into units may be a division by logical function, and there may be other ways of dividing in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A terrain classification method based on multispectral images and SAR images, characterized by comprising the following steps:
S101: acquiring a multispectral image of a preset area, and performing multispectral image feature extraction on the multispectral image;
S102: acquiring a time-series SAR image of the preset area, and performing time-series SAR image feature extraction on the time-series SAR image;
S103: performing feature-level fusion of the multispectral image features and the time-series SAR image features to obtain a terrain classification result.
2. the method according to claim 1, wherein the method also includes:
S104: performing accuracy assessment on the obtained terrain classification result.
3. The method according to claim 2, characterized in that the step S101 comprises:
selecting multispectral images covering the same study area with cloud cover of less than 10%; pre-processing the multispectral images, and then calculating the vegetation index NDVI and the water body index NDWI from the top-of-atmosphere (TOA) reflectance;
the vegetation index (NDVI) is calculated as:
NDVI = (Band_NIR - Band_Red) / (Band_NIR + Band_Red)   (1)
the water body index (NDWI) is calculated as:
NDWI = (Band_Green - Band_NIR) / (Band_Green + Band_NIR)   (2)
where Band_Green, Band_NIR, and Band_Red are the top-of-atmosphere reflectances of the green, near-infrared, and red bands of the multispectral image, respectively.
4. The method according to claim 3, characterized in that the step S102 comprises:
selecting time-series SAR images covering the same study area and pre-processing them: adaptive filtering based on statistically homogeneous pixel sets, calculation of the interferometric coherence coefficient, estimation of the VV/VH polarization channel amplitude dispersion, and extraction of grey-level co-occurrence matrix GLCM texture features.
5. The method according to claim 4, characterized in that the adaptive filtering based on statistically homogeneous pixel sets and the calculation of the interferometric coherence coefficient comprise:
according to the statistical distribution of each pixel, selecting pixels sharing the same distribution to form a statistically homogeneous pixel set; in a single-look complex (SLC) SAR image, the superimposed scattered signal Z(P) of a distributed scatterer is a complex Gaussian random variable with variance 2σ²; according to the statistical theory of distributed scatterers in SAR images, the amplitude A(P) = |Z(P)| of the single-look SLC image follows a Rayleigh distribution, and its expectation and variance are respectively:
E(A(P)) = σ·sqrt(π/2),  Var(A(P)) = (2 - π/2)·σ²   (3)
a common measure of the SAR speckle noise level is the coefficient of variation, defined as:
CV = sqrt(Var(A(P))) / E(A(P))   (4)
for the Rayleigh-distributed random variable A(P), substituting formula (3) into (4) shows that the coefficient of variation CV_A is a constant:
CV_A = sqrt(4/π - 1) ≈ 0.523   (5)
for a multi-look image with L looks, the above formula becomes:
CV_A ≈ 0.523 / sqrt(L)   (6)
therefore the relationship between the variance and the expectation can be expressed as:
Var(A(P)) = [CV_A · E(A(P))]² = [CV_A · μ(P)]²   (7)
where μ(P) denotes the expectation E(A(P)) of the random variable A(P);
supposing there are N acquisitions of SAR images, for any pixel P the amplitudes form an N-dimensional vector along the time axis:
A(P) = [A_1(P), A_2(P), …, A_N(P)]^T   (8)
the sample estimate of the expectation E(A(P)) can be expressed as the sample mean Ā(P) = (1/N)·Σ_{i=1}^{N} A_i(P); according to the central limit theorem, Ā(P) approaches a Gaussian distribution as the number of samples N increases; assuming N is sufficiently large so that the Gaussian assumption holds, the distribution of Ā(P) can be expressed as:
Ā(P) ~ N(μ(P), Var(A(P))/N)   (9)
from this, the confidence interval of μ(P) is obtained:
Ā(P) - z_{1-α/2}·sqrt(Var(A(P))/N) ≤ μ(P) ≤ Ā(P) + z_{1-α/2}·sqrt(Var(A(P))/N)   (10)
where z_{1-α/2} is the 1-α/2 quantile of the standard normal distribution; substituting formulas (6) and (7) into (10) gives:
Ā(P) - z_{1-α/2}·CV_A·μ(P)/sqrt(N) ≤ μ(P) ≤ Ā(P) + z_{1-α/2}·CV_A·μ(P)/sqrt(N)   (11)
the statistically homogeneous pixel set of P is obtained from formula (11);
the traditional Lee filter is modified so that speckle suppression of the backscatter intensity is carried out over the obtained statistically homogeneous pixel sets, yielding N filtered SAR intensity images;
the interferometric coherence coefficient of a pixel is calculated as:
γ = |Σ_{i=1}^{K} S_1(i)·S_2*(i)| / sqrt( Σ_{i=1}^{K} |S_1(i)|² · Σ_{i=1}^{K} |S_2(i)|² )   (12)
where S_1(i) and S_2(i) are the complex values of pixel i in the master and slave images of the interferometric pair, and K is the number of pixels in the statistically homogeneous pixel set containing pixel i; the coherence coefficient of each resample is calculated with formula (12), giving a group of coherence estimates γ_1, γ_2, …, γ_R; using these resampled estimates together with the originally estimated coherence coefficient γ, an unbiased estimate of the coherence coefficient is obtained by:
γ_c = 2·γ - (1/R)·Σ_{r=1}^{R} γ_r   (13)
6. The method according to claim 5, characterized in that obtaining the statistically homogeneous pixel set of P from formula (11) comprises:
first setting a search window, taking the sample mean Ā of all pixels within the search window as the estimate of μ(P), calculating an initial confidence interval with α = 50%, and provisionally marking the pixels falling within this initial interval as homogeneous points;
then iterating: computing the mean Ā_init over the initial homogeneous point set Ω_init as the new estimate of μ(P), calculating a new confidence interval with α = 5%, and re-testing the pixels in the initial homogeneous point set against the updated confidence interval, where pixels falling within the new interval are homogeneous points;
after the new homogeneous point set is obtained, assessing the connectivity of its pixels and retaining only the points directly or indirectly connected to the reference point P, thereby obtaining the statistically homogeneous pixel set of P.
7. The method according to claim 5, characterized in that calculating the coherence coefficient of each sample using formula (12) comprises:
first, using the topographic phase simulated from a reference digital elevation model to remove the corresponding bias from the interferometric phase;
second, calculating the coherence coefficient over the homogeneous set containing each pixel, so as to remove the estimation bias introduced by image quality;
finally, for all samples in a homogeneous pixel set, resampling R times with replacement (bootstrap) to obtain a group of coherence estimates γ_1, γ_2, …, γ_R.
8. The method according to claim 4, characterized in that the VV/VH polarization channel amplitude dispersion estimation comprises:
using the filtered time-series amplitudes, calculating the amplitude dispersion of the VV and VH polarization channels separately; for any pixel:
amp_dispersion = std(amp) / mean(amp)
where amp is the amplitude converted from the backscatter intensity, and the denominator mean(amp) and numerator std(amp) are the mean and standard deviation of the amplitude calculated along the time series, respectively;
the grey-level co-occurrence matrix GLCM texture feature extraction comprises:
using the original single-look SAR images, calculating the GLCM variance separately for the two polarization channels VV and VH, for each image of the time series;
The step S103 includes:
using a random forest algorithm, inputting the multispectral image features and the time-series SAR image features into a classifier and fusing them;
The step S104 includes:
calculating the confusion matrix of the classification result from the collected sample points, and from it calculating the user's accuracy and producer's accuracy of each class, the overall classification accuracy, and the Kappa coefficient.
9. A terrain classification device based on multispectral images and SAR images, characterized by comprising:
a multispectral image feature extraction unit, configured to acquire a multispectral image of a preset area and perform multispectral image feature extraction on the multispectral image;
a time-series SAR image feature extraction unit, configured to acquire a time-series SAR image of the preset area and perform time-series SAR image feature extraction on the time-series SAR image;
a feature-level fusion unit, configured to perform feature-level fusion of the multispectral image features and the time-series SAR image features to obtain a terrain classification result.
10. The device according to claim 9, characterized in that the device further comprises:
an accuracy assessment unit, configured to perform accuracy assessment on the obtained terrain classification result.
CN201811555032.2A 2018-12-18 2018-12-18 Ground feature classification method and device based on multispectral image and SAR image Active CN109711446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811555032.2A CN109711446B (en) 2018-12-18 2018-12-18 Ground feature classification method and device based on multispectral image and SAR image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811555032.2A CN109711446B (en) 2018-12-18 2018-12-18 Ground feature classification method and device based on multispectral image and SAR image

Publications (2)

Publication Number Publication Date
CN109711446A true CN109711446A (en) 2019-05-03
CN109711446B CN109711446B (en) 2021-01-19

Family

ID=66256837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811555032.2A Active CN109711446B (en) 2018-12-18 2018-12-18 Ground feature classification method and device based on multispectral image and SAR image

Country Status (1)

Country Link
CN (1) CN109711446B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685155A (en) * 2008-09-27 2010-03-31 中国科学院电子学研究所 Method of optimizing interference coefficient of coherence on the basis of polarimetric synthetic aperture radar (SAR)
CN102663394A (en) * 2012-03-02 2012-09-12 北京航空航天大学 Method of identifying large and medium-sized objects based on multi-source remote sensing image fusion
KR101405435B1 (en) * 2012-12-14 2014-06-11 한국항공우주연구원 Method and apparatus for blending high resolution image
US20140301662A1 (en) * 2013-03-17 2014-10-09 ISC8 Inc. Analysis, Labeling and Exploitation of Sensor Data in Real Time
CN106295714A (en) * 2016-08-22 2017-01-04 中国科学院电子学研究所 A kind of multi-source Remote-sensing Image Fusion based on degree of depth study
CN107103282A (en) * 2017-03-22 2017-08-29 中国科学院遥感与数字地球研究所 Ultrahigh resolution diameter radar image sorting technique
CN108549080A (en) * 2018-02-28 2018-09-18 中国电力科学研究院有限公司 A kind of transmission tower position extracting method and system
CN108960300A (en) * 2018-06-20 2018-12-07 北京工业大学 A kind of urban land use information analysis method based on deep neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BENJAMIN BECHTEL; LINDA SEE; GERALD MILLS; MÍCHEÁL FOLEY: "Classification of Local Climate Zones Using SAR and Multispectral Data in an Arid Environment", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing *
于秀兰: "Feature selection for fusion classification of multispectral and SAR remote sensing images", Journal of Infrared and Millimeter Waves *
孙佳琛: "Research on multi-source image fusion algorithms based on statistical inference and their application", China Masters' Theses Full-text Database, Information Science and Technology *
曾钰: "Change Information Extraction from Fusion of SAR and Optical Imagery", 30 June 2015, China Science and Technology Press *
罗丹: "Collaborative classification of urban ground objects from high-resolution optical and polarimetric SAR imagery", China Masters' Theses Full-text Database, Basic Sciences *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210574A (en) * 2019-06-13 2019-09-06 中国科学院自动化研究所 Diameter radar image decomposition method, Target Identification Unit and equipment
CN110210574B (en) * 2019-06-13 2022-02-18 中国科学院自动化研究所 Synthetic aperture radar image interpretation method, target identification device and equipment
CN110501716A (en) * 2019-07-29 2019-11-26 武汉大学 Earth surface classification method based on single-photon laser radar ambient noise rate
CN111077525A (en) * 2019-12-20 2020-04-28 长安大学 Surface dimension deformation calculation method and system fusing SAR and optical offset technology
CN111077525B (en) * 2019-12-20 2022-12-27 长安大学 Surface three-dimensional deformation calculation method and system integrating SAR and optical offset technology
CN113205475A (en) * 2020-01-16 2021-08-03 吉林大学 Forest height inversion method based on multi-source satellite remote sensing data
CN113205475B (en) * 2020-01-16 2022-07-12 吉林大学 Forest height inversion method based on multi-source satellite remote sensing data
CN111427375B (en) * 2020-03-09 2024-01-09 深圳块织类脑智能科技有限公司 Micro-area intelligent division method and system for unmanned aerial vehicle inspection tour
CN111427375A (en) * 2020-03-09 2020-07-17 深圳中科保泰科技有限公司 Micro-area intelligent division method and system for unmanned aerial vehicle patrolling and patrolling
CN111798132B (en) * 2020-07-06 2023-05-02 北京师范大学 Cultivated land dynamic monitoring method and system based on multi-source time sequence remote sensing depth cooperation
CN111798132A (en) * 2020-07-06 2020-10-20 北京师范大学 Dynamic farmland monitoring method and system based on multi-source time sequence remote sensing depth coordination
CN112098733A (en) * 2020-09-22 2020-12-18 北京环境特性研究所 Electromagnetic scattering characteristic data interpolation generation method and device
CN112257515A (en) * 2020-09-29 2021-01-22 河南大学 SAR image terrain classification method and device based on complex terrain
CN112257515B (en) * 2020-09-29 2022-09-23 河南大学 SAR image terrain classification method and device based on complex terrain
CN112270675A (en) * 2020-11-11 2021-01-26 中山大学 Urban waterlogging area detection method based on polarized radar remote sensing image
CN112668400A (en) * 2020-12-08 2021-04-16 深圳先进技术研究院 Vegetation identification method and application
CN112733746A (en) * 2021-01-14 2021-04-30 中国海洋大学 Collaborative classification method for fusing InSAR coherence and multispectral remote sensing
CN112733746B (en) * 2021-01-14 2022-06-28 中国海洋大学 Collaborative classification method for fusing InSAR coherence and multispectral remote sensing
CN113009481A (en) * 2021-01-15 2021-06-22 扬州哈工科创机器人研究院有限公司 Forest surface feature imaging inversion method based on interferometric SAR radar
CN112906645A (en) * 2021-03-15 2021-06-04 山东科技大学 Sea ice target extraction method with SAR data and multispectral data fused
CN112906645B (en) * 2021-03-15 2022-08-23 山东科技大学 Sea ice target extraction method with SAR data and multispectral data fused
CN114299402A (en) * 2022-03-07 2022-04-08 成都理工大学 Hidden danger point automatic identification method, electronic equipment and computer readable storage medium
CN114299402B (en) * 2022-03-07 2022-05-20 成都理工大学 Hidden danger point automatic identification method, electronic equipment and computer readable storage medium
CN114492210B (en) * 2022-04-13 2022-07-19 潍坊绘圆地理信息有限公司 Hyperspectral satellite borne data intelligent interpretation system and implementation method thereof
CN114492210A (en) * 2022-04-13 2022-05-13 潍坊绘圆地理信息有限公司 Hyperspectral satellite borne data intelligent interpretation system and implementation method thereof
CN114639005A (en) * 2022-05-20 2022-06-17 湖北省国土测绘院 Multi-classifier fused crop automatic classification method and system and storage medium
CN115035413A (en) * 2022-06-30 2022-09-09 河南理工大学 Multi-temporal active and passive remote sensing random forest crop identification method and system
CN115035413B (en) * 2022-06-30 2023-08-29 河南理工大学 Multi-time-phase active and passive remote sensing random forest crop identification method and system
CN116343053A (en) * 2022-12-27 2023-06-27 生态环境部卫星环境应用中心 Automatic solid waste extraction method based on fusion of optical remote sensing image and SAR remote sensing image
CN116343053B (en) * 2022-12-27 2024-02-09 生态环境部卫星环境应用中心 Automatic solid waste extraction method based on fusion of optical remote sensing image and SAR remote sensing image
CN117173584A (en) * 2023-08-02 2023-12-05 宁波大学 Land small micro water body extraction method and device for fusion of PolSAR and Pan images
CN118053069A (en) * 2024-01-04 2024-05-17 华南农业大学 Crop identification method and system based on sequential SAR data multi-feature fusion

Also Published As

Publication number Publication date
CN109711446B (en) 2021-01-19

Similar Documents

Publication Publication Date Title
CN109711446A (en) A kind of terrain classification method and device based on multispectral image and SAR image
Ok et al. Evaluation of random forest method for agricultural crop classification
Halme et al. Utility of hyperspectral compared to multispectral remote sensing data in estimating forest biomass and structure variables in Finnish boreal forest
Nath et al. A survey of image classification methods and techniques
Lu et al. A survey of image classification methods and techniques for improving classification performance
Yuan et al. Remote sensing image segmentation by combining spectral and texture features
CN105138970B (en) Classification of Polarimetric SAR Image method based on spatial information
CN105930772A (en) City impervious surface extraction method based on fusion of SAR image and optical remote sensing image
Reis et al. Identification of hazelnut fields using spectral and Gabor textural features
Liu et al. Unsupervised change detection in multispectral remote sensing images via spectral-spatial band expansion
CN108460342A (en) Hyperspectral image classification method based on convolution net and Recognition with Recurrent Neural Network
CN102982338B (en) Classification of Polarimetric SAR Image method based on spectral clustering
Lin et al. Classification of tree species in overstorey canopy of subtropical forest using QuickBird images
CN106339674A (en) Hyperspectral image classification method based on edge preservation and graph cut model
Csillik et al. Cropland mapping from Sentinel-2 time series data using object-based image analysis
CN104881868A (en) Method for extracting phytocoenosium spatial structure
S Bhagat Use of remote sensing techniques for robust digital change detection of land: A review
CN114926748A (en) Soybean remote sensing identification method combining Sentinel-1/2 microwave and optical multispectral images
CN105718942A (en) Hyperspectral image imbalance classification method based on mean value drifting and oversampling
CN107256407A (en) A kind of Classification of hyperspectral remote sensing image method and device
Orlíková et al. Land cover classification using sentinel-1 SAR data
Costa et al. Spatio-temporal segmentation applied to optical remote sensing image time series
Radhika et al. Ensemble subspace discriminant classification of satellite images
CN108509835A (en) PolSAR image terrain classification methods based on DFIC super-pixel
CN107644230A (en) A kind of spatial relationship modeling method of remote sensing images object

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant