CN102938148A - High-spectrum image texture analysis method based on V-GLCM (Gray Level Co-occurrence Matrix) - Google Patents


Info

Publication number: CN102938148A
Application number: CN2012103800404A
Authority: CN (China)
Prior art keywords: pixel, texture, image, sigma, glcm
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 苏红军
Original and current assignee: Hohai University (HHU)
Application filed by Hohai University
Landscapes: Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image texture analysis method based on a V-GLCM (Volume Gray Level Co-occurrence Matrix), comprising the following steps: selecting the hyperspectral image data to be subjected to texture analysis; performing a gray-level range conversion on the original image, normalizing the gray values to a fixed range; selecting a moving cubic window of suitable size and the angle parameters; taking the statistical index information inside the moving cube as the texture feature of the cube's center pixel; building a co-occurrence matrix from the pixel-pair relations inside the moving cubic window; computing quantitative index statistics from the established co-occurrence matrix and writing the result back to the center of the current moving cubic window, i.e., replacing the texture feature at that position; and continuously moving the cubic window, performing texture computation and extraction over the whole image, to obtain a V-GLCM texture image. The image texture extracted by the disclosed method takes the relations between adjacent bands of a hyperspectral image into account, contains the texture features of adjacent bands, and can fully represent the special properties of hyperspectral data.

Description

A hyperspectral image texture analysis method based on V-GLCM
Technical field
The invention belongs to the field of hyperspectral remote sensing image processing, and specifically relates to a hyperspectral image texture analysis method based on V-GLCM (Volume Gray Level Co-occurrence Matrix).
Background technology
Hyperspectral remote sensing refers to the technology of acquiring data about objects using many narrow electromagnetic bands. It is one of the major technological breakthroughs in Earth observation achieved in the last two decades of the 20th century, and remains at the frontier of remote sensing today and for decades to come. Compared with conventional multispectral remote sensing, hyperspectral data are characterized by large data volume, many very narrow bands, strong inter-band correlation, high information redundancy, and the integration of images and spectra. However, the massive volume and high dimensionality of hyperspectral data pose great difficulties for its transmission and storage, and also present new challenges to traditional remote sensing image processing techniques. Fast processing and thorough mining of hyperspectral data has therefore long been a vexing problem. Faced with data of tens or hundreds of bands, how to use, extract, and analyze the maximum amount of information of interest effectively while improving processing efficiency has become a new question requiring study.
Texture is an important feature in images. Using it effectively can further advance the automation of image interpretation, and texture analysis helps suppress the phenomena of different objects sharing the same spectrum and the same object showing different spectra. Meanwhile, for hyperspectral images with complex spatial relations and serious spectral mixing, classification research that incorporates spatial attributes can effectively improve classification accuracy. Research on hyperspectral image texture can therefore not only deepen the theoretical level of hyperspectral texture research, but also effectively raise the classification accuracy of hyperspectral images and further promote the wide application of hyperspectral remote sensing, which is of important theoretical and practical significance for its development. Texture description and analysis methods have been studied in great depth at home and abroad. For example, Shu carried out in-depth research and analysis of the texture problems of multispectral and hyperspectral images, proposed the new concept of texture as a mapping mode from the spectral space of ground targets to a two-dimensional projection space, and discussed establishing a remote sensing image analysis methodology based on image segments [Shu N. About texture problems of multispectral and hyperspectral images. Geomatics and Information Science of Wuhan University, 2004, 29(4): 292-295. Shu N. On the theory and methods of remote sensing image processing and analysis. Geomatics and Information Science of Wuhan University, 2007, 32(11): 1007-1015. Shu N. Texture analysis and fractal description of satellite remote sensing images. Journal of Wuhan Technical University of Surveying and Mapping, 1998, 23(4): 370-373]. The main methods of current image texture research can be divided into structural methods, statistical methods, model-based methods, and mathematical-transform methods.
Statistical analysis methods describe the numerical characteristics of texture and classify images with these features, alone or combined with other non-texture features. They mainly include the gray-level histogram, the gray level co-occurrence matrix (GLCM), and the gray-level run-length method, among which GLCM is the most widely used. As early as 1973, Haralick et al. proposed the GLCM algorithm, which effectively describes texture based on the direction, adjacent interval, and variation amplitude of image gray levels, and designed 14 feature indices [Haralick R.M., Shanmugam K., and Dinstein I.H. Textural features for image classification. IEEE Trans. Syst., Man, Cybern., 1973, 3(6): 610-621], of which nine are the most commonly used: contrast (moment of inertia), entropy, angular second moment (energy), homogeneity (local stationarity), dissimilarity, mean, variance, correlation, etc. Since its publication the algorithm has been widely used and many improved algorithms have been derived from it, and most of the texture-analysis literature considers the gray level co-occurrence matrix the best performer; the method is, however, confined to single-scale images. Statistical analysis techniques obtain the texture features of an image with first-order, second-order, or higher-order statistics. Although they satisfy the premise that image texture statistics carry a certain meaning, they basically all extract texture from a single scale and do not reflect the features at different scales, while in fact a principal aspect of texture is precisely its scale characteristics.
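As a point of reference for the single-band baseline discussed above, the classic planar GLCM can be sketched in a few lines. The function name, the quantization to a small number of gray levels, and the restriction to non-negative offset components are illustrative assumptions, not part of Haralick's original formulation:

```python
import numpy as np

def glcm_2d(band, offset=(0, 1), levels=8):
    """Classic single-band GLCM in the spirit of Haralick et al. (1973):
    count pixel pairs (m, n) separated by the planar displacement (dy, dx)."""
    # quantize the band to `levels` gray levels
    q = ((band.astype(float) - band.min())
         / max(float(np.ptp(band)), 1e-12) * (levels - 1)).astype(int)
    dy, dx = offset  # non-negative components assumed for simplicity
    src = q[:q.shape[0] - dy, :q.shape[1] - dx]  # first pixel of each pair
    dst = q[dy:, dx:]                            # its partner at the offset
    M = np.zeros((levels, levels), dtype=int)
    np.add.at(M, (src.ravel(), dst.ravel()), 1)  # tally each (m, n) pair
    return M
```

Normalizing `M` by its sum turns the counts into the joint probabilities from which the Haralick indices are computed.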
Traditional GLCM texture analysis processes each band of a hyperspectral remote sensing image independently and thus lacks any consideration of the texture dependence between adjacent bands. In the hyperspectral data cube, adjacent bands of the hyperspectral image are highly correlated; if adjacent bands are taken into account during texture feature extraction, a texture image with richer information may be obtained.
In remote sensing images, texture refers to the image structure caused by the regular variation of tone inside a target area, and is important evidence for distinguishing ground-object attributes and decomposing targets. A multi/hyperspectral image is an expression of object spectral information: the gray values of a pixel over its bands form the set of surface-reflected or radiated spectral data, i.e., the spectral vector, while a single-band image can be regarded as a special manifestation of this spectral vector, namely a spectral vector whose number of components is 1. Each spectral vector occupies a specific position in spectral space and can be regarded as a point there; the positions in spectral space of the spectral vectors of different ground objects differ, while those of the same type of ground object are identical or very close. When considering image texture, people usually observe the arrangement and distribution of the gray or color elements of the image in a two-dimensional space, formed by the image on a photograph or a computer screen — the two-dimensional plane over which the pixels are distributed. Nearly all texture concepts and texture analysis methods are aimed at such a two-dimensional plane. Since the information of every pixel on this plane is a manifestation of the spectral information, or spectral vector, of a ground target, so-called texture is precisely the rearrangement — or distribution in some sense — of ground-target spectral vectors from spectral space onto the two-dimensional space of the ground-object distribution. Texture is thus a kind of "mapping mode" from points in object spectral space to the two-dimensional space of the ground-object distribution, and different mapping modes are different textures. It should be pointed out that this two-dimensional space generally refers to the actual ground targets reflected by a concrete image together with their environment or background — part of the ground-object space. However, the two-dimensional space faced in texture analysis is not the actual ground-object space but a projection of it, the projection space of the actual two-dimensional distribution of ground objects. From the above analysis, a definition of texture can be given: image texture is the mapping mode from the differing characterizing points of ground objects (or other targets) in spectral space to the two-dimensional projection space of the ground-object distribution, and different mapping modes (the "arrangements" in the usual sense) constitute different textures. This mapping is a many-to-many complex mapping: in spectral space the characterizing points of a ground object are generally multiple, and each of them corresponds to many points in the two-dimensional projection space of the ground-target distribution.
The texture concept above solves the mapping problem of texture from a single band to multiple bands, but it is difficult to apply to concrete mathematical analysis, and how to handle hyperspectral image texture from the perspective of geometric analysis remains a challenge.
Summary of the invention
Object of the invention: aiming at the problems and deficiencies of the above prior art, the purpose of the invention is to provide a hyperspectral image texture analysis method based on V-GLCM (Volume Gray Level Co-occurrence Matrix), which extends the GLCM (gray level co-occurrence matrix) to three-dimensional cubic space. The extracted image texture takes the relations between adjacent bands of a hyperspectral image into account; containing the texture features of neighboring bands, it can more fully reflect the particular properties of hyperspectral data.
Technical solution: to achieve the above object, the technical solution adopted by the invention is a hyperspectral image texture analysis method based on V-GLCM, comprising the following steps:
Step 1: select the hyperspectral remote sensing image data to be subjected to texture analysis;
Step 2: perform a gray-level range conversion on the image data of said step 1, normalizing the gray values to the range 0-255;
Step 3: select a moving cubic window of suitable size and the angle parameters, take the statistical index information inside the moving cube as the texture feature of the cube's center pixel, and build a co-occurrence matrix from the pixel-pair relations inside the moving cubic window;
Step 4: compute the quantitative index statistics of the established co-occurrence matrix and write the result back to the center of the current moving window, representing the texture feature of the pixel at that position;
Step 5: continuously move the cubic window and perform texture computation and extraction over the whole image to obtain the V-GLCM texture image.
Further, the size of the moving cubic window in said step 3 is computed with the semivariogram function:
$$\gamma(h) = \frac{1}{2N(h)} \sum \left[ z(x) - z(x+h) \right]^2$$
where γ(h) is the semivariogram value, N(h) is the number of pixel pairs, Σ denotes the sum of the squared gray-level differences over the pixel pairs, z(x) is the gray value of the initial pixel along the x axis, z(x+h) is the gray value of the pixel at a distance of h pixels from the initial pixel, and h represents the pixel-pair distance along a fixed direction. The window size at which the value of γ(h) levels off is the best moving cubic window size.
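The semivariogram-based window selection can be sketched as follows. This is a minimal illustration under stated assumptions: a single band, lags taken along one axis only, and a simple relative-change stopping rule that the patent does not specify; the function names are likewise illustrative.

```python
import numpy as np

def semivariogram(band: np.ndarray, h: int, axis: int = 1) -> float:
    """Empirical semivariogram gamma(h) = 1/(2 N(h)) * sum (z(x) - z(x+h))^2
    over all pixel pairs separated by h pixels along one axis."""
    a = np.moveaxis(band, axis, 0)
    z, z_h = a[:-h].astype(float), a[h:].astype(float)  # pairs at lag h
    return float(np.mean((z - z_h) ** 2) / 2.0)         # mean divides by N(h)

def pick_window_size(band, max_lag=15, tol=0.02):
    """Heuristic: smallest lag at which gamma(h) levels off
    (relative change below `tol`), taken as the moving-window size."""
    gammas = [semivariogram(band, h) for h in range(1, max_lag + 1)]
    for h in range(1, len(gammas)):
        if abs(gammas[h] - gammas[h - 1]) <= tol * max(gammas[h - 1], 1e-12):
            return h + 1
    return max_lag
```

In practice γ(h) would be inspected over several directions and bands before fixing the cube size.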
Further, the co-occurrence matrix in said step 3 is established with the following formula:
$$M(m,n) = \#\left\{ (x,y,z) \in W \;:\; W(x,y,z) = m,\; W(x{+}dx,\, y{+}dy,\, z{+}dz) = n \right\}$$
In the formula, m and n denote positions in the co-occurrence matrix; M(m,n), the value of element (m,n) of the co-occurrence matrix, represents the number of pixel pairs inside the corresponding moving window W whose pair relation is δ = (d, θ, ψ) and whose gray values are m and n respectively, where d is the pixel-pair distance, θ the horizontal angle of the pair, and ψ the zenith angle of the pair; W_x, W_y, and W_z are the window sizes along the three dimensions of the moving cube — the x and y axes (spatial dimensions) and the z axis (spectral dimension); dx, dy, and dz form the vector form (dx, dy, dz) of the pixel-pair relation δ = (d, θ, ψ); and W(x+dx, y+dy, z+dz) denotes the displacement of the moving cubic window along the three dimensions during the co-occurrence matrix computation.
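A minimal sketch of building such a volume co-occurrence matrix for a single displacement is given below. The function name, the quantization of gray values to a small number of levels, and the restriction to non-negative offset components are assumptions made for illustration, not details fixed by the patent:

```python
import numpy as np

def vglcm(cube, offset=(0, 1, 1), levels=16):
    """Volume co-occurrence matrix M(m, n) of a (rows, cols, bands) window:
    count voxel pairs with gray levels m and n at displacement (dy, dx, dz)."""
    # quantize the cube to `levels` gray levels
    q = ((cube.astype(float) - cube.min())
         / max(float(np.ptp(cube)), 1e-12) * (levels - 1)).astype(int)
    dy, dx, dz = offset  # non-negative components assumed for simplicity
    src = q[:q.shape[0] - dy, :q.shape[1] - dx, :q.shape[2] - dz]
    dst = q[dy:, dx:, dz:]                 # partner voxels at the offset
    M = np.zeros((levels, levels), dtype=int)
    np.add.at(M, (src.ravel(), dst.ravel()), 1)  # tally each (m, n) pair
    return M
```

The only structural change from the planar GLCM is the third slicing axis, which lets the displacement point into a neighboring band.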
Further, the spatial relations of the pixel pairs inside the moving cubic window in said step 3 have the following character:
The moving window must consider three dimensions, so the spatial relation of a pixel pair changes from the plane to solid space, and in the image cube each pixel has 26 adjacent pixels. Therefore, when the distance is 1 pixel, the center pixel and its neighboring pixels define 26 directions; if symmetry is considered, 13 possibilities remain, namely the directions (0°, 0°), (0°, 45°), (0°, 90°), (0°, 135°), (45°, 45°), (45°, 90°), (45°, 135°), (90°, 45°), (90°, 90°), (90°, 135°), (135°, 45°), (135°, 90°), and (135°, 135°). Table 1 lists the V-GLCM pixel-pair directions in vector form.
Table 1. Pixel-pair directions in the V-GLCM model
[table content provided as an image in the original patent]
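The 26-neighbor/13-direction count above can be checked mechanically. The enumeration below is an illustrative sketch (the function name is an assumption) and does not reproduce the specific vector forms of Table 1; it only verifies that halving the 26 unit offsets by symmetry leaves 13 directions:

```python
from itertools import product

def unique_pair_offsets():
    """Enumerate the 26 unit-distance neighbors of a voxel and keep one
    of each symmetric pair (an offset and its negation), leaving 13."""
    kept = []
    for d in product((-1, 0, 1), repeat=3):
        if d == (0, 0, 0):
            continue  # the voxel itself is not a neighbor
        if tuple(-c for c in d) in kept:
            continue  # symmetric counterpart already kept
        kept.append(d)
    return kept
```

Each retained offset corresponds to one (θ, ψ) direction pair in the list above.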
Further, the statistical indices in said step 3 and the quantitative index statistics in said step 4 adopt the following six indices describing image texture features:

1) Variance: describes the texture characteristics of the image by the amount of variation:

$$\mathrm{VAR}_i = \sum_{i=1}^{N} \sum_{j=1}^{N} (v_i - \mu_i)^2 P_{ij} \quad\text{and}\quad \mathrm{VAR}_j = \sum_{i=1}^{N} \sum_{j=1}^{N} (v_j - \mu_j)^2 P_{ij}$$

where VAR_i is the variance of pixel i and VAR_j the variance of pixel j, with the means

$$\mu_i = \sum_{i=1}^{N} \sum_{j=1}^{N} v_i P_{ij}, \qquad \mu_j = \sum_{i=1}^{N} \sum_{j=1}^{N} v_j P_{ij}$$

v_i and v_j are the gray values of pixels i and j respectively, i and j are pixel positions ranging from 1 to N, N is the total number of pixels, and P_ij is the co-occurrence matrix established in step 3;
2) Contrast: a measure of the gray-level difference between the pixels of the image, reflecting the sharpness of the image and the depth of the texture grooves; the higher the value, the stronger the contrast and the coarser the image, and conversely the softer the image:

$$\mathrm{CON} = \sum_{i=1}^{N} \sum_{j=1}^{N} (v_i - v_j)^2 P_{ij}$$

where CON denotes the contrast;

3) Dissimilarity: similar in meaning to contrast, but whereas in contrast the weight of the gray-level difference grows exponentially, here the weight of the gray-level difference is linear:

$$\mathrm{DIS} = \sum_{i=1}^{N} \sum_{j=1}^{N} |v_i - v_j| \, P_{ij}$$

where DIS denotes the dissimilarity;

4) Energy: reflects the homogeneity or smoothness of the image region; a small energy indicates a relatively uniform or smooth image:

$$\mathrm{ASM} = \sum_{i=1}^{N} \sum_{j=1}^{N} P_{ij}^2$$

where ASM denotes the energy;

5) Entropy: a measure of the randomness of the image content, describing the degree of disorder of the image texture; a large entropy indicates stronger randomness:

$$\mathrm{ENT} = -\sum_{i=1}^{N} \sum_{j=1}^{N} P_{ij} \log P_{ij}$$

where ENT denotes the entropy;

6) Homogeneity: reflects the uniformity of the image:

$$\mathrm{HOM} = \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{P_{ij}}{1 + (v_i - v_j)^2}$$

where HOM denotes the homogeneity.
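Under the simplifying assumptions that the gray value of a co-occurrence bin equals its index (v_i = i, v_j = j) and that log denotes the natural logarithm — neither fixed by the patent — the six indices can be sketched from a single co-occurrence matrix as follows:

```python
import numpy as np

def texture_indices(M: np.ndarray) -> dict:
    """The six texture indices computed from a co-occurrence matrix M,
    normalized to joint probabilities P."""
    P = M.astype(float) / max(M.sum(), 1)
    n = P.shape[0]
    vi, vj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mu_i = (vi * P).sum()            # mean over the row gray levels
    nz = P > 0                       # avoid log(0) in the entropy sum
    return {
        "variance_i":   ((vi - mu_i) ** 2 * P).sum(),
        "contrast":     ((vi - vj) ** 2 * P).sum(),
        "dissimilarity": (np.abs(vi - vj) * P).sum(),
        "energy":        (P ** 2).sum(),
        "entropy":      -(P[nz] * np.log(P[nz])).sum(),
        "homogeneity":   (P / (1.0 + (vi - vj) ** 2)).sum(),
    }
```

For a purely diagonal matrix the contrast and dissimilarity vanish and the homogeneity equals 1, matching the intuition that such a matrix describes a perfectly uniform pairing.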
The texture extracted with the V-GLCM model in step 5 embodies the concept and characteristics of hyperspectral image volume texture, namely:
Concept: hyperspectral image volume texture is a real function of the differing characterizing points in spectral space over the two-dimensional projection space of ground targets, i.e., a real function taking particular values in a solid space; every point of the solid space takes a definite value, and the set of characterizing-point values over the solid space constitutes the volume texture.
Hyperspectral image volume texture has the following seemingly contradictory characteristics:
1) Structure: since ground objects have a certain morphological structure, the volume texture has certain spatial structural characteristics; moreover, the values at a point x and at x+h in the image are correlated to some degree (for h within a certain range), and beyond that range the correlation weakens or even disappears;
2) Randomness: owing to the complexity of ground objects, the volume texture has irregular features;
3) Limitedness: owing to the restriction of the imaged region, the volume texture is confined to a certain geometric space;
4) Continuity: different hyperspectral image volume textures have different degrees of continuity, which can be described with the semivariogram between ground objects;
5) Anisotropy: the volume texture is called isotropic when it has the same properties in all directions, and anisotropic otherwise.
Beneficial effects: the invention uses the V-GLCM method to describe and extract the texture of hyperspectral remote sensing images. The method can extract not only the local texture features of the image but also the texture information of adjacent bands, and can thus describe hyperspectral image volume texture completely. At the same time, the concept of hyperspectral image volume texture is proposed, generalizing traditional texture to a higher-dimensional space, and a hyperspectral image texture description and extraction method based on the V-GLCM model is established. The method can describe and extract image texture in the higher-dimensional space more effectively; the resulting texture feature data contain a larger amount of information, and the subsequent classification accuracy is much higher than that of comparable methods.
Description of drawings
Fig. 1(a) is a schematic diagram of the V-GLCM computation scheme, and Fig. 1(b) is a diagram of the V-GLCM pixel-pair relations;
Fig. 2 is the flow chart of V-GLCM hyperspectral image texture analysis;
Fig. 3(a) to Fig. 3(f) are, respectively, the variance, contrast, dissimilarity, energy, entropy, and homogeneity images extracted from the hyperspectral image DC Mall with the method of the invention;
Fig. 4(a1) and Fig. 4(a2) are the variance images extracted with the GLCM method and with the method of the invention, respectively; Fig. 4(b1) and Fig. 4(b2) are the corresponding contrast images; Fig. 4(c1) and Fig. 4(c2) the dissimilarity images; Fig. 4(d1) and Fig. 4(d2) the energy images; Fig. 4(e1) and Fig. 4(e2) the entropy images; and Fig. 4(f1) and Fig. 4(f2) the homogeneity images;
Fig. 5 shows the classification results of the texture features extracted with the method of the invention and with the GLCM method.
Embodiment
The invention is further illustrated below with reference to the drawings and specific embodiments. It should be understood that these embodiments are only intended to illustrate the invention and not to limit its scope; after reading the invention, modifications by those skilled in the art to its various equivalent forms all fall within the scope defined by the claims appended to this application.
Addressing the shortcoming that the two-dimensional moving window of the GLCM texture model can only compute single-band texture and ignores the texture information between adjacent bands, a moving cube is proposed as the new model window for texture computation, changing the pixel-pair spatial relation from two dimensions to three; this is the greatest characteristic of the V-GLCM texture analysis model of the invention (shown in Fig. 1(a) and Fig. 1(b)). The hyperspectral image volume texture obtained with the V-GLCM model contains not only the local texture information of the image but also the spectral information of adjacent bands, and can more completely describe the unique characteristics of hyperspectral image texture.
Fig. 2 is the flow chart of the hyperspectral image texture analysis method based on V-GLCM proposed by the invention, which mainly comprises the following steps:
(1) Select the hyperspectral remote sensing image data to be subjected to texture analysis and carry out the necessary preprocessing on the data, removing the noise bands;
(2) Perform a gray-level range conversion on the image data of step 1, normalizing the gray values to the range 0-255;
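Step (2) is a plain linear rescaling; a minimal sketch follows (the rounding and the uint8 output type are assumptions — the patent only requires the 0-255 range):

```python
import numpy as np

def normalize_gray(cube):
    """Linearly rescale the image gray values to the 0-255 range."""
    c = cube.astype(float)
    scaled = (c - c.min()) / max(float(np.ptp(c)), 1e-12) * 255.0
    return np.round(scaled).astype(np.uint8)
```

The same transform is applied to every band so that the subsequent quantization and co-occurrence counting operate on a common gray-level range.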
(3) Select a moving cubic window of suitable size and the angle parameters; the moving cube size is specifically computed with the semivariogram function, whose formula is as follows:
$$\gamma(h) = \frac{1}{2N(h)} \sum \left[ z(x) - z(x+h) \right]^2$$
where γ(h) is the semivariogram value, N(h) is the number of pixel pairs, Σ denotes the sum of the squared gray-level differences over the pixel pairs, z(x) is the gray value of the initial pixel along the x axis, z(x+h) is the gray value of the pixel at a distance of h pixels from the initial pixel, and h represents the pixel-pair distance along a fixed direction. The window size at which the value of γ(h) levels off is the best cube size.
(4) Take the six statistical index quantities inside the moving cube — variance, contrast, dissimilarity, energy, entropy, and homogeneity — as the texture features of the cube's center pixel; the statistical indices are computed with the following formulas:
1) Variance: describes the texture characteristics of the image by the amount of variation.

$$\mathrm{VAR}_i = \sum_{i=1}^{N} \sum_{j=1}^{N} (v_i - \mu_i)^2 P_{ij} \quad\text{and}\quad \mathrm{VAR}_j = \sum_{i=1}^{N} \sum_{j=1}^{N} (v_j - \mu_j)^2 P_{ij}$$

where the means are

$$\mu_i = \sum_{i=1}^{N} \sum_{j=1}^{N} v_i P_{ij}, \qquad \mu_j = \sum_{i=1}^{N} \sum_{j=1}^{N} v_j P_{ij}$$

v_i and v_j are the gray values of pixels i and j respectively, i and j are pixel positions ranging from 1 to N, N is the total number of pixels, and P_ij is the co-occurrence matrix established in step 3 of claim 1.

2) Contrast: a measure of the gray-level difference between the pixels of the image, reflecting the sharpness of the image and the depth of the texture grooves; the higher the value, the stronger the contrast and the coarser the image, and conversely the softer the image.

$$\mathrm{CON} = \sum_{i=1}^{N} \sum_{j=1}^{N} (v_i - v_j)^2 P_{ij}$$

3) Dissimilarity: similar in meaning to contrast, but whereas in contrast the weight of the gray-level difference grows exponentially, here the weight of the gray-level difference is linear.

$$\mathrm{DIS} = \sum_{i=1}^{N} \sum_{j=1}^{N} |v_i - v_j| \, P_{ij}$$

4) Energy: reflects the homogeneity or smoothness of the image region; a small energy indicates a relatively uniform or smooth image.

$$\mathrm{ASM} = \sum_{i=1}^{N} \sum_{j=1}^{N} P_{ij}^2$$

5) Entropy: a measure of the randomness of the image content, describing the degree of disorder of the image texture; a large entropy indicates stronger randomness.

$$\mathrm{ENT} = -\sum_{i=1}^{N} \sum_{j=1}^{N} P_{ij} \log P_{ij}$$

6) Homogeneity: reflects the uniformity of the image.

$$\mathrm{HOM} = \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{P_{ij}}{1 + (v_i - v_j)^2}$$
(5) Build the co-occurrence matrix from the pixel-pair relations inside the moving cubic window; the pixel-pair relation in the moving cubic window has 13 possibilities in total, namely the directions (0°, 0°), (0°, 45°), (0°, 90°), (0°, 135°), (45°, 45°), (45°, 90°), (45°, 135°), (90°, 45°), (90°, 90°), (90°, 135°), (135°, 45°), (135°, 90°), and (135°, 135°). The specific implementation of the co-occurrence matrix algorithm adopts the following formula:
$$M(m,n) = \#\left\{ (x,y,z) \in W \;:\; W(x,y,z) = m,\; W(x{+}dx,\, y{+}dy,\, z{+}dz) = n \right\}$$
In the formula, m and n denote positions in the co-occurrence matrix; M(m,n), the value of element (m,n) of the co-occurrence matrix, represents the number of pixel pairs inside the corresponding moving window W whose pair relation is δ = (d, θ, ψ) and whose gray values are m and n respectively, where d is the pixel-pair distance, θ the horizontal angle of the pair, and ψ the zenith angle of the pair. W_x, W_y, and W_z are the window sizes along the three dimensions of the moving cube — the x and y axes (spatial dimensions) and the z axis (spectral dimension); dx, dy, and dz form the vector form (dx, dy, dz) of the pixel-pair relation δ = (d, θ, ψ); and W(x+dx, y+dy, z+dz) denotes the displacement of the moving cubic window along the three dimensions during the co-occurrence matrix computation.
(6) Compute the quantitative index statistics of the established co-occurrence matrix and write the result back to the center of the current moving window, i.e., replace the texture feature of the pixel at that position;
(7) Keep moving the cubic window and perform texture computation and extraction over the whole image to obtain the V-GLCM texture data of the whole image;
(8) Combine the obtained texture data with the spectral data for classification, and use the overall classification accuracy to assess the performance of the texture extraction method.
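Steps (5) to (7) above can be sketched end-to-end as follows. This is an illustration, not the patent's implementation: it back-fills a single index (energy) for a single pixel-pair offset, whereas the method computes all six indices over several directions; the function name, parameters, and quantization are assumptions.

```python
import numpy as np

def vglcm_texture_image(cube, win=7, offset=(0, 1, 1), levels=8):
    """Slide a win x win x win cube over a (rows, cols, bands) image,
    build the co-occurrence matrix at each position for one offset,
    and back-fill the energy index at the window center."""
    rows, cols, bands = cube.shape
    q = ((cube - cube.min()) / max(float(np.ptp(cube)), 1e-12)
         * (levels - 1)).astype(int)
    half = win // 2
    out = np.zeros((rows, cols, bands))  # borders stay zero
    dy, dx, dz = offset                  # non-negative components assumed
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            for b in range(half, bands - half):
                w = q[r-half:r+half+1, c-half:c+half+1, b-half:b+half+1]
                src = w[:win - dy, :win - dx, :win - dz]
                dst = w[dy:, dx:, dz:]
                M = np.zeros((levels, levels))
                np.add.at(M, (src.ravel(), dst.ravel()), 1)
                P = M / M.sum()
                out[r, c, b] = (P ** 2).sum()  # energy, back-filled at center
    return out
```

A production version would vectorize the window loop and average the resulting texture images over the chosen directions, as done in the experiments below.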
All embodiments of the invention are implemented in a computer system.
To further understand the invention and verify its actual effect, the following experiment was carried out:
The data are a hyperspectral image of Washington D.C. acquired by the HYDICE (Hyperspectral Digital Imagery Collection Experiment) sensor. The data cover 210 bands over the 0.4-2.5 µm spectral range at a spatial resolution of about 2.8 m; after rejecting the water-absorption and noise bands, 191 bands were kept for data analysis. The DC I data size is 304 × 301, and the scene contains six classes: road (Road), grass (Grass), water (Water), trail (Trail), trees (Tree), and buildings (Roof). Fig. 3(a) to Fig. 3(f) show the six texture feature images — variance, contrast, dissimilarity, energy, entropy, and homogeneity — extracted with the V-GLCM texture method.
To compare the performance of the V-GLCM and GLCM methods, the texture features extracted from the DC I data by the two different methods under the same window size and the same direction were compared. First, the GLCM method and the V-GLCM method were each used to extract hyperspectral image texture features; then, the extracted texture features were added respectively to the raw image data, K preferred bands, K cluster centers, etc., to participate in classification together; finally, the classification results of the two methods were contrasted to assess them. Because of the rotational invariance of texture, both methods take the average of the texture images of four directions as the final texture image for the subsequent comparative analysis. The window size of the V-GLCM method is 7 × 7 × 7 with orientation angles (135°, 135°); the window size of the GLCM method is 7 × 7 with direction 135°.
Fig. 4(a1) to Fig. 4(f2) compare the six texture features — variance, contrast, dissimilarity, energy, entropy, and homogeneity — extracted by the two texture methods; for each texture index the left image is the result extracted by the GLCM method and the right image the result extracted by the V-GLCM method. The textures extracted by the two algorithms (with the same direction and distance parameters) were combined respectively with all 191 bands of the raw data, a PCA compression of the raw data (taking the first 5 principal components), 5 selected bands (62, 110, 25, 95, 152), and the 5 cluster centers extracted by a semi-supervised clustering algorithm — forming four classification schemes — and classification experiments were carried out. The classification results of the V-GLCM and GLCM algorithms on the DC Mall I data are shown in Table 2.
Table 2. Comparison of V-GLCM and GLCM classification results
[table content provided as an image in the original patent]
To compare the classification accuracy and effect of each scheme more clearly, Fig. 5 gives the classification-accuracy comparison of the various schemes. For each scheme, the leftmost bar is the accuracy of traditional image classification without texture features, the middle bar is the accuracy after adding GLCM texture features, and the rightmost bar is the accuracy after adding V-GLCM texture features. As Fig. 5 shows, adding texture information increases the classification accuracy considerably, and the accuracy obtained by the V-GLCM method is higher than that obtained by the GLCM method. The experimental results show that the proposed V-GLCM texture method can effectively describe and extract the texture of targets in hyperspectral remote-sensing images, and that adding this texture information to traditional spectral classification can effectively raise the classification accuracy.

Claims (5)

1. A hyperspectral image texture analysis method based on V-GLCM, characterized in that it comprises the following steps: step 1, selecting the hyperspectral remote-sensing image data on which texture analysis is to be performed;
Step 2, performing a gray-level range conversion on the image data of step 1, normalizing the gray values to the range 0-255;
Step 3, selecting a moving cube window of suitable size and the angle parameters, taking the statistical index information inside the moving cube as the texture feature of the cube's centre pixel, and building a co-occurrence matrix from the pixel-pair relations inside the moving cube window;
Step 4, performing the quantified index statistics on the established co-occurrence matrix and backfilling the result to the centre of the current moving window, where it represents the texture feature of the pixel at that position;
Step 5, continually moving the cube window, carrying out the texture computation and extraction over the whole image, and obtaining the V-GLCM texture image.
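As an illustrative sketch (not part of the claims), steps 2 to 5 can be written as follows; the function names, the default window size of 3, the displacement (1, 1, 0) and the use of entropy as the backfilled index are assumptions of this sketch, not requirements of the method:

```python
import numpy as np

def window_glcm(win, disp, levels):
    """Steps 3-4 for one moving cube: build the co-occurrence matrix of the
    cube `win` for the displacement vector `disp`, normalise it, and reduce
    it to a single texture value (entropy here) for the centre pixel."""
    # Gray values of every pixel pair (p, p + disp) inside the cube, via slicing.
    def pair_slices(n, d):
        return slice(max(0, -d), n - max(0, d)), slice(max(0, d), n - max(0, -d))
    (ax, bx), (ay, by), (az, bz) = (pair_slices(s, d) for s, d in zip(win.shape, disp))
    a = win[ax, ay, az].ravel()
    b = win[bx, by, bz].ravel()
    M = np.zeros((levels, levels))
    np.add.at(M, (a, b), 1)                       # co-occurrence counts
    P = M / max(M.sum(), 1)                       # normalised co-occurrence matrix
    nz = P[P > 0]
    return -(nz * np.log(nz)).sum()               # entropy index

def vglcm(cube, w=3, disp=(1, 1, 0), levels=8):
    """Steps 2-5: quantize the cube, slide a w x w x w window over it and
    backfill each centre position with the window's texture value."""
    q = (cube - cube.min()) / max(np.ptp(cube), 1e-12)      # step 2: normalise gray range
    q = np.minimum((q * levels).astype(int), levels - 1)    # quantize to `levels` gray levels
    out = np.zeros(cube.shape)
    h = w // 2
    for x in range(h, cube.shape[0] - h):
        for y in range(h, cube.shape[1] - h):
            for z in range(h, cube.shape[2] - h):
                win = q[x - h:x + h + 1, y - h:y + h + 1, z - h:z + h + 1]
                out[x, y, z] = window_glcm(win, disp, levels)   # step 4: backfill
    return out
```

Because the window is a cube spanning rows, columns and bands, the backfilled value reflects the relation between adjacent bands as well as the spatial neighbourhood, which is the point of the V-GLCM extension over the plane GLCM.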
2. The hyperspectral image texture analysis method based on V-GLCM according to claim 1, characterized in that in step 3 the size of the moving cube window is computed with the semivariogram function:
γ(h) = (1 / (2N(h))) Σ [z(x) − z(x + h)]²
where γ(h) is the semivariogram value, N(h) is the number of pixel pairs, Σ denotes the sum of the squared gray-level differences over the pixel pairs, z(x) is the gray value of the initial pixel on the x axis, z(x + h) is the gray value of the pixel at a distance of h pixels from the initial pixel, and h is the distance between the pixels of a pair along a fixed direction; the window size at which the value of γ(h) levels off is the best moving-cube window size.
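A minimal sketch of this window-size selection, assuming a single 2-D band and pixel pairs taken along the x axis (the function name and the synthetic test band are illustrative):

```python
import numpy as np

def semivariogram(band, h):
    """Empirical semivariogram of a 2-D band along the x axis for lag h:
    gamma(h) = (1 / (2 N(h))) * sum((z(x) - z(x + h))**2)."""
    diff = band[:, h:].astype(float) - band[:, :-h]   # all pixel pairs at distance h
    n_h = diff.size                                   # N(h): number of pairs
    return (diff ** 2).sum() / (2 * n_h)

# Window-size selection per this claim: compute gamma(h) for increasing lags
# and pick the window size at which the curve levels off.
band = np.random.default_rng(0).integers(0, 256, size=(64, 64))
gammas = [semivariogram(band, h) for h in range(1, 10)]
```

In practice the lag at which `gammas` stops growing (the range of the semivariogram) is taken as the moving-cube window size.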
3. The hyperspectral image texture analysis method based on V-GLCM according to claim 1, characterized in that in step 3 the co-occurrence matrix is built with the following formula:
M(m, n) = Σ_{x=1}^{W_x} Σ_{y=1}^{W_y} Σ_{z=1}^{W_z} [ W(x, y, z) = m and W(x + dx, y + dy, z + dz) = n ], where the bracket is 1 when the condition holds and 0 otherwise
In the formula, m and n denote positions in the co-occurrence matrix; M(m, n), the value of element (m, n) of the co-occurrence matrix, is the number of pixel pairs in the corresponding moving window W whose pair relation is δ = (d, θ, ψ) and whose gray values are m and n respectively, where d is the distance of the pixel pair, θ is the horizontal angle of the pixel pair and ψ is the zenith angle of the pixel pair; W_x, W_y and W_z are the window sizes along the x, y and z axes of the moving cube; dx, dy and dz are the vector form of the pair relation δ = (d, θ, ψ), i.e. W(x + dx, y + dy, z + dz) is displaced by (dx, dy, dz) along the three dimensions of the moving cube window when the co-occurrence matrix is computed.
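The counting formula above can be transcribed term by term as the following sketch, which takes the pair relation already in its vector form (dx, dy, dz); the function name is illustrative:

```python
import numpy as np

def cooccurrence(W, disp, levels):
    """M(m, n): number of pixel pairs inside the moving cube W whose positions
    differ by disp = (dx, dy, dz) (the vector form of delta = (d, theta, psi))
    and whose gray values are m and n. W holds integers in [0, levels)."""
    dx, dy, dz = disp
    Wx, Wy, Wz = W.shape
    M = np.zeros((levels, levels), dtype=np.int64)
    for x in range(Wx):
        for y in range(Wy):
            for z in range(Wz):
                x2, y2, z2 = x + dx, y + dy, z + dz
                # count the pair only when its second pixel lies inside the cube
                if 0 <= x2 < Wx and 0 <= y2 < Wy and 0 <= z2 < Wz:
                    M[W[x, y, z], W[x2, y2, z2]] += 1
    return M
```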
4. The hyperspectral image texture analysis method based on V-GLCM according to claim 1, characterized in that in step 3 the spatial relation of the pixel pairs inside the moving cube window has the following property:
the moving window must consider three dimensions, so the spatial relation of a pixel pair extends from the plane into a solid space; in the image cube each pixel has 26 adjacent pixels, so at a distance of 1 pixel the centre pixel and its neighbours span 26 directions; taking symmetry into account, 13 possibilities remain, namely the directions (0°, 0°), (0°, 45°), (0°, 90°), (0°, 135°), (45°, 45°), (45°, 90°), (45°, 135°), (90°, 45°), (90°, 90°), (90°, 135°), (135°, 45°), (135°, 90°) and (135°, 135°).
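The 26-neighbour and 13-direction counts can be checked directly by enumerating integer offsets (a small sketch; the lexicographic rule for choosing one offset from each symmetric pair is an arbitrary convention of this sketch):

```python
import itertools

# The 26 neighbours of the centre voxel at distance 1 are all nonzero offsets
# with components in {-1, 0, 1}; identifying each offset with its opposite
# (the symmetry mentioned in this claim) leaves 13 distinct directions.
offsets = [o for o in itertools.product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
directions = [o for o in offsets if o > tuple(-c for c in o)]  # keep one of each +/- pair
```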
5. The hyperspectral image texture analysis method based on V-GLCM according to claim 1, characterized in that the statistical indices of step 3 and the quantified index statistics of step 4 use the following six indices describing the texture features of the image:
1) Variance: describes the texture characteristics of the image through the amount of variability:
VAR_i = Σ_{i=1}^{N} Σ_{j=1}^{N} (v_i − μ_i)² P_ij  and  VAR_j = Σ_{i=1}^{N} Σ_{j=1}^{N} (v_j − μ_j)² P_ij
In the formula, VAR_i is the variance of pixel i, VAR_j is the variance of pixel j, v_i and v_j are the gray values of pixels i and j respectively,
μ_i = Σ_{i=1}^{N} Σ_{j=1}^{N} v_i P_ij  and  μ_j = Σ_{i=1}^{N} Σ_{j=1}^{N} v_j P_ij,
i and j are the positions of the pixels, ranging from 1 to N, N is the total number of pixels, and P_ij is the co-occurrence matrix built in step 3;
2) Contrast: a measure of the gray-level difference between the pixels of the image that reflects the sharpness of the image and the depth of the texture grooves; the higher this value, the stronger the contrast and the coarser the image, while a lower value indicates a softer image:
CON = Σ_{i=1}^{N} Σ_{j=1}^{N} (v_i − v_j)² P_ij
In the formula, CON represents the contrast;
3) Diversity: similar in meaning to contrast, but whereas in the contrast the weight of the gray-value difference grows exponentially, here the weight of the gray-value difference is linear:
DIS = Σ_{i=1}^{N} Σ_{j=1}^{N} |v_i − v_j| P_ij
In the formula, DIS represents the diversity;
4) Energy: reflects the homogeneity or smoothness of the image region; a small energy means a relatively even or smooth image:
ASM = Σ_{i=1}^{N} Σ_{j=1}^{N} P_ij²
In the formula, ASM represents energy;
5) Entropy: a measure of the randomness of the image content that describes the degree of disorder of the image texture; a large entropy indicates stronger randomness:
ENT = − Σ_{i=1}^{N} Σ_{j=1}^{N} P_ij log P_ij
In the formula, ENT represents the entropy;
6) Homogeneity: reflects the degree of uniformity of the image:
HOM = Σ_{i=1}^{N} Σ_{j=1}^{N} P_ij / (1 + (v_i − v_j)²)
In the formula, HOM represents the homogeneity.
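The six indices of this claim can be sketched from a normalised co-occurrence matrix as follows; taking the gray value v_i to be the gray-level index i itself, and computing only the row form of the variance, are simplifying assumptions of this sketch:

```python
import numpy as np

def texture_indices(P):
    """The six texture measures of claim 5, computed from a normalised
    co-occurrence matrix P (P.sum() == 1)."""
    N = P.shape[0]
    i, j = np.indices((N, N))
    mu_i = (i * P).sum()                                  # mean used by the variance
    return {
        "VAR": ((i - mu_i) ** 2 * P).sum(),               # 1) variance (row form)
        "CON": ((i - j) ** 2 * P).sum(),                  # 2) contrast
        "DIS": (np.abs(i - j) * P).sum(),                 # 3) diversity
        "ASM": (P ** 2).sum(),                            # 4) energy
        "ENT": -(P[P > 0] * np.log(P[P > 0])).sum(),      # 5) entropy
        "HOM": (P / (1 + (i - j) ** 2)).sum(),            # 6) homogeneity
    }
```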
CN2012103800404A 2012-10-09 2012-10-09 High-spectrum image texture analysis method based on V-GLCM (Gray Level Co-occurrence Matrix) Pending CN102938148A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012103800404A CN102938148A (en) 2012-10-09 2012-10-09 High-spectrum image texture analysis method based on V-GLCM (Gray Level Co-occurrence Matrix)


Publications (1)

Publication Number Publication Date
CN102938148A true CN102938148A (en) 2013-02-20

Family

ID=47697041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012103800404A Pending CN102938148A (en) 2012-10-09 2012-10-09 High-spectrum image texture analysis method based on V-GLCM (Gray Level Co-occurrence Matrix)

Country Status (1)

Country Link
CN (1) CN102938148A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103211621A (en) * 2013-04-27 2013-07-24 上海市杨浦区中心医院 Ultrasound directed texture quantitative measuring instrument and method thereof
CN104346800A (en) * 2013-08-02 2015-02-11 南京理工大学 Low-light-level image target detection method based on texture significance
CN107111883A (en) * 2014-10-30 2017-08-29 皇家飞利浦有限公司 Texture analysis figure for view data
CN108009562A (en) * 2017-10-25 2018-05-08 中山大学 A kind of hydrographic water resource feature space variability knows method for distinguishing
US10964036B2 (en) 2016-10-20 2021-03-30 Optina Diagnostics, Inc. Method and system for detecting an anomaly within a biological tissue

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5699452A (en) * 1991-09-27 1997-12-16 E. I. Du Pont De Nemours And Company Method and system of identifying a valid object in a background of an image using a gray level co-occurrence matrix of the image
CN102194127A (en) * 2011-05-13 2011-09-21 中国科学院遥感应用研究所 Multi-frequency synthetic aperture radar (SAR) data crop sensing classification method
CN102495005A (en) * 2011-11-17 2012-06-13 江苏大学 Method for diagnosing crop water deficit through hyperspectral image technology


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FUAN TSAI et al.: "Texture Analysis for Three Dimensional Remote Sensing Data by 3D GLCM", Proceedings of 27th Asian Conference on Remote Sensing *
FENG Jianhui et al.: "Research on extracting texture-feature images based on the gray-level co-occurrence matrix", 《背景测绘》 *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C05 Deemed withdrawal (patent law before 1993)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130220