WO2022141145A1 - Object-oriented multi-scale segmentation method and system for high-resolution remote sensing images - Google Patents

Object-oriented multi-scale segmentation method and system for high-resolution remote sensing images

Info

Publication number
WO2022141145A1
WO2022141145A1 PCT/CN2020/141206 CN2020141206W WO2022141145A1 WO 2022141145 A1 WO2022141145 A1 WO 2022141145A1 CN 2020141206 W CN2020141206 W CN 2020141206W WO 2022141145 A1 WO2022141145 A1 WO 2022141145A1
Authority
WO
WIPO (PCT)
Prior art keywords
heterogeneity
texture
image
remote sensing
features
Prior art date
Application number
PCT/CN2020/141206
Other languages
English (en)
French (fr)
Inventor
沈小乐 (SHEN Xiaole)
魏碧云 (WEI Biyun)
逯金辉 (LU Jinhui)
Original Assignee
深圳技术大学 (Shenzhen Technology University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Technology University (深圳技术大学)
Priority to PCT/CN2020/141206
Publication of WO2022141145A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection

Definitions

  • The invention relates to the technical field of image processing, and in particular to an object-oriented multi-scale segmentation method and system for high-resolution remote sensing images.
  • Remote sensing image processing refers to the series of operations performed on remote sensing images to achieve a desired purpose, such as radiometric correction, geometric correction, image enhancement, projection transformation, mosaicking, feature extraction, classification, image analysis, and various thematic processing.
  • In remote sensing image analysis, object-oriented multi-scale segmentation is a key technique: it is the premise and core of object-oriented high-resolution remote sensing image analysis, and a good segmentation result lays a solid foundation for subsequent, deeper analysis such as land-cover classification, target recognition, and information extraction.
  • In existing object-based visual attention models, objects and their hierarchies are usually given manually before visual attention begins, which sidesteps the problem of defining objects. There is therefore the open problem of how to determine objects and their hierarchies within object-based visual attention models.
  • The main purpose of the present invention is to provide an object-oriented multi-scale segmentation method and system for high-resolution remote sensing images, aiming to solve the prior-art problem of how objects and their hierarchical structure are to be determined in object-based visual attention models.
  • A first aspect of the present invention provides an object-oriented multi-scale segmentation method for high-resolution remote sensing images, including: acquiring a remote sensing image; extracting the spectral features, shape features, texture features and edge features of the remote sensing image; calculating the heterogeneity of the spectral features, shape features and texture features respectively, and segmenting the image according to the heterogeneity to obtain a heterogeneity image; calculating the edge intensity of the remote sensing image using the edge features, and generating an edge intensity map; and merging the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.
  • A second aspect of the present application provides an object-oriented multi-scale segmentation system for high-resolution remote sensing images, comprising: a remote sensing image acquisition module for acquiring remote sensing images; a feature extraction module for extracting the spectral features, shape features, texture features and edge features of the remote sensing images; a heterogeneity image calculation module for calculating the heterogeneity of the spectral features, shape features and texture features respectively, and segmenting the image according to the heterogeneity to obtain a heterogeneity image;
  • an edge intensity map generation module for calculating the edge intensity of the remote sensing image using the edge features and generating an edge intensity map;
  • and a segmentation feature map generation module for merging the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.
  • A third aspect of the present application provides an electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the object-oriented high-resolution remote sensing image multi-scale segmentation method described above is implemented.
  • A fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the object-oriented multi-scale segmentation method for high-resolution remote sensing images described in any one of the above.
  • The object-oriented multi-scale segmentation method for high-resolution remote sensing images has the beneficial effect that, by segmenting the image on the scales of spectral, shape, texture and edge features, the segmented objects are determined, so that an object hierarchy network can be constructed in subsequent processing; thus, when the method is applied to a visual attention model, the objects and their hierarchical structure can be determined.
  • FIG. 1 is a schematic flowchart of an object-oriented multi-scale segmentation method for high-resolution remote sensing images according to an embodiment of the present application;
  • FIG. 2 shows the sample images used to verify the object-oriented high-resolution remote sensing image multi-scale segmentation method according to an embodiment of the present application;
  • FIG. 3 shows the segmentation results of a Brodatz texture composite image used to verify the method according to an embodiment of the present application;
  • FIG. 4 shows the segmentation results of a Brodatz texture composite image used to verify the method according to an embodiment of the present application;
  • FIG. 5 shows the segmentation results of a remote sensing texture composite image used to verify the method according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of mis-segmentation for verifying the method according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of the manual segmentation used to verify the method according to an embodiment of the present application;
  • FIG. 8 is a schematic diagram of the manual segmentation of another image used to verify the method according to an embodiment of the present application;
  • FIG. 9 shows segmentation results at different scales, together with the corresponding mis-segmentation maps, for verifying the method according to an embodiment of the present application;
  • FIG. 10 shows the segmentation experiment on the third image used to verify the method according to an embodiment of the present application;
  • FIG. 11 is a schematic structural block diagram of an object-oriented multi-scale segmentation system for high-resolution remote sensing images according to an embodiment of the present application;
  • FIG. 12 is a schematic structural block diagram of an electronic device according to an embodiment of the present application.
  • An object-oriented multi-scale segmentation method for high-resolution remote sensing images includes: S1, acquiring a remote sensing image; S2, extracting the spectral features, shape features, texture features and edge features of the remote sensing image; S3, calculating the heterogeneity of the spectral features, shape features and texture features respectively, and segmenting the image according to the heterogeneity to obtain a heterogeneity image; S4, calculating the edge intensity of the remote sensing image using the edge features, and generating an edge intensity map; S5, merging the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.
  • By segmenting the image on the scales of spectral, shape, texture and edge features, the segmented objects are determined, so that an object hierarchy network can be constructed in subsequent processing; therefore, when applied to a visual attention model, the objects and their hierarchies can be determined.
  • Merging the heterogeneity image and the edge intensity map includes: calculating the growth values of the spectral heterogeneity, the shape heterogeneity and the texture heterogeneity; calculating the overall heterogeneity growth value from these growth values; calculating the object merging cost from the edge intensity and the overall heterogeneity growth value; and merging the heterogeneity image and the edge intensity map when the object merging cost is minimal.
  • Extracting the texture features of the remote sensing image includes: decomposing the remote sensing image using the non-subsampled contourlet transform (NSCT) to obtain subbands at multiple scales and in multiple directions; and calculating the local energy of each scale and direction subband using a local texture energy function, the local energy being the texture feature of the remote sensing image.
  • The texture feature is calculated as:
  • E_{s,d}(x,y) = Σ_{i=−n}^{n} Σ_{j=−n}^{n} w(i,j)·|f_{s,d}(x+i, y+j)| (1)
  • w(i,j) = g(i,j) / Σ_{i=−n}^{n} Σ_{j=−n}^{n} g(i,j) (2)
  • g(i,j) = exp(−(i²+j²)/(2σ²)) (3)
  • where (2n+1)×(2n+1) is the window size, E_{s,d}(x,y) is the texture feature E_{s,d} at coordinates (x,y) in the image, f_{s,d} denotes the subband, s denotes the scale, and d denotes the direction.
  • The present invention considers that, within a local window, the contributions of the NSCT coefficients to the texture feature at the window center are not uniform: the farther a coefficient is from the window center, the smaller its contribution to the texture feature, following a Gaussian distribution. Since the NSCT filters are non-subsampled, this method yields a texture feature matrix of the same size as the original image, i.e., every pixel in the image has a corresponding texture feature, which provides the basis for constructing texture heterogeneity.
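  • As an illustration of the Gaussian-weighted local energy in formulas (1)-(3), the following is a minimal Python sketch; it assumes the subbands f_{s,d} have already been produced by an NSCT implementation, and the function name, window size n and Gaussian width sigma are illustrative assumptions rather than values taken from the patent:

```python
import numpy as np
from scipy.signal import fftconvolve

def texture_energy(subband: np.ndarray, n: int = 4, sigma: float = 2.0) -> np.ndarray:
    """Gaussian-weighted local energy of one NSCT subband f_{s,d}.

    The window is (2n+1) x (2n+1); weights fall off with distance from the
    window centre following a Gaussian; the output has the same size as the
    input because NSCT subbands are not subsampled.
    """
    ii, jj = np.mgrid[-n:n + 1, -n:n + 1]
    g = np.exp(-(ii**2 + jj**2) / (2.0 * sigma**2))
    w = g / g.sum()  # normalised Gaussian weights w(i, j)
    # E_{s,d}(x, y) = sum_{i,j} w(i, j) * |f_{s,d}(x + i, y + j)|
    return fftconvolve(np.abs(subband), w, mode="same")
```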
  • The interior of an image object is heterogeneous not only in its spectral characteristics; when an image object contains two different texture regions, the object is also heterogeneous in its texture characteristics.
  • Existing object-oriented multi-scale segmentation methods cannot make full use of the texture information of ground objects in remote sensing images to distinguish different texture regions while also taking the spectral and shape characteristics of the image into account.
  • The present invention therefore proposes an object-oriented multi-scale segmentation algorithm for remote sensing images combining texture features.
  • The algorithm introduces the concept of "texture heterogeneity", comprehensively considers the spectral, shape and texture characteristics of remote sensing images, improves the heterogeneity criterion of the fractal net evolution algorithm, and uses the improved merging criterion to perform object-oriented multi-scale segmentation of the image.
  • Calculating the growth value of the texture heterogeneity includes: acquiring the texture regions of the different texture feature types in the remote sensing image; calculating the texture heterogeneity of the remote sensing image for each texture feature type; and calculating the heterogeneity growth value from each texture heterogeneity. The texture heterogeneity for each texture feature type is calculated as
  • h_texture = Σ_s Σ_d ω_{s,d}·σ_{s,d} (4)
  • where s denotes the scale, d denotes the direction, ω_{s,d} is the weight factor of the texture feature at scale s and direction d with 0 ≤ ω_{s,d} ≤ 1, and σ_{s,d} is the standard deviation of the texture features of the image object at scale s and direction d;
  • the heterogeneity growth value is calculated as:
  • Δh_texture = Σ_s Σ_d ω_{s,d}·[ n_Merge·σ_{s,d}^{Merge} − ( n_Obj1·σ_{s,d}^{Obj1} + n_Obj2·σ_{s,d}^{Obj2} ) ] (5)
  • where n_Obj1, n_Obj2 and n_Merge are the numbers of pixels of object Obj1, object Obj2 and the merged object respectively, and σ_{s,d}^{Obj1}, σ_{s,d}^{Obj2} and σ_{s,d}^{Merge} are the standard deviations of the texture features of object Obj1, object Obj2 and the merged object at scale s and direction d, respectively.
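  • A minimal sketch of formula (5) in Python, assuming each object's texture features are available as per-pixel arrays keyed by (scale, direction); the data layout and names are assumptions:

```python
import numpy as np

def texture_heterogeneity_growth(obj1, obj2, weights):
    """Growth value of the texture heterogeneity when merging Obj1 and Obj2.

    `obj1` and `obj2` map (s, d) -> 1-D array of texture-feature values of
    the object's pixels; `weights` maps (s, d) -> weight factor w_{s,d}.
    """
    delta = 0.0
    for (s, d), w in weights.items():
        e1, e2 = obj1[(s, d)], obj2[(s, d)]
        merged = np.concatenate([e1, e2])  # pixels of the merged object
        delta += w * (merged.size * merged.std()
                      - (e1.size * e1.std() + e2.size * e2.std()))
    return delta
```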
  • The method for calculating the object merging cost includes: calculating the edge merging cost from the edge intensity, as
  • EdgeCost(Obj1, Obj2) = Σ_{(x,y)∈Common} EdgeIntensity(x,y) (6)
  • where EdgeIntensity(x,y) is the edge intensity of the image at point (x,y) and Common is the set of mutually adjacent points of object Obj1 and object Obj2; and calculating the object merging cost from the edge merging cost and the heterogeneity growth value, as:
  • F = f + ω_edge·EdgeCost(Obj1, Obj2) (7)
  • where f is the overall heterogeneity growth value.
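  • A minimal sketch of formulas (6) and (7), assuming the edge intensity map is a NumPy array indexed as [row, column] and that the edge weight ω_edge is a free parameter not fixed by the text:

```python
def object_merge_cost(edge_intensity, common, overall_growth, w_edge=0.3):
    """F = f + w_edge * EdgeCost(Obj1, Obj2).

    `common` is the set of (x, y) points adjacent across the two objects'
    shared boundary, `overall_growth` is the overall heterogeneity growth
    value f, and `w_edge` is an assumed weight.
    """
    edge_cost = sum(edge_intensity[y, x] for (x, y) in common)  # formula (6)
    return overall_growth + w_edge * edge_cost                  # formula (7)
```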
  • The method for generating the edge intensity map includes: applying a first-order or second-order differential to the remote sensing image to obtain its gradient; convolving the remote sensing image with a Gaussian filter to obtain a smoothed image; calculating the edge intensity and normal vector of the remote sensing image from the gradient and the smoothed image to obtain a preliminary edge intensity map; and refining the preliminary edge intensity map using non-maximum suppression to obtain the edge intensity map of the remote sensing image.
  • Edge-based segmentation methods exploit the discontinuity of gray levels between different regions of an image and segment it by finding the boundaries between regions.
  • In object-oriented image segmentation, once the edge of an image object is determined, the shape of the object is determined as well.
  • Image edge detection is therefore a crucial step.
  • The purpose of an edge detection algorithm is to detect the pixels (edge points) at which the gray level exhibits a step-like or roof-like change.
  • Such algorithms generally exploit the first-order or second-order derivative behavior of the gray-level change at edge points, and extract edge points by applying a first-order or second-order differential to the image.
  • The edge intensity M and the normal vector θ of the image at point (x,y) computed by the Canny algorithm are:
  • M(x,y) = √( (∂f_s/∂x)² + (∂f_s/∂y)² ) (12)
  • θ(x,y) = arctan( (∂f_s/∂y) / (∂f_s/∂x) ) (13)
  • where f_s(x,y) is the Gaussian-smoothed image.
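  • The Canny front end of formulas (12) and (13) can be sketched as follows (Sobel derivatives stand in for the partial derivatives of the smoothed image; sigma is an assumed parameter):

```python
import numpy as np
from scipy import ndimage

def edge_intensity_and_normal(image: np.ndarray, sigma: float = 1.5):
    """Edge intensity M and normal direction theta of the smoothed image."""
    f_s = ndimage.gaussian_filter(image.astype(float), sigma)  # f_s = G * f
    gx = ndimage.sobel(f_s, axis=1)   # approx. df_s/dx
    gy = ndimage.sobel(f_s, axis=0)   # approx. df_s/dy
    m = np.hypot(gx, gy)              # M(x, y), formula (12)
    theta = np.arctan2(gy, gx)        # theta(x, y), formula (13)
    return m, theta
```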
  • The edge merging cost incurred when object Obj1 and object Obj2 are merged is defined as in formula (6) above,
  • where EdgeIntensity(x,y) is the edge intensity of the image at point (x,y),
  • and Common is the set of mutually adjacent points of object Obj1 and object Obj2, which may be called the adjacent edge.
  • The merging cost F produced by merging image object Obj1 and image object Obj2 into image object Obj_Merge is defined as the weighted sum of the growth value of the object heterogeneity and the edge merging cost of the objects, i.e., formula (7).
  • The criterion of "minimum object merging cost" is adopted as the region merging strategy, and the edge merging cost is combined with the heterogeneity criterion to form a new merging criterion, yielding an object-oriented multi-scale segmentation algorithm based on the edge merging cost criterion, sketched below.
  • The algorithm not only comprehensively considers the spectral, shape, texture and other characteristics of image objects, but also makes full use of the edge features of the ground objects in the image, so the edges of ground objects are located more accurately.
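  • A high-level sketch of the bottom-up merging under the "minimum object merging cost" criterion; the data structures, the stop condition and all names are illustrative assumptions, not the patent's exact procedure:

```python
def multiscale_merge(objs, nbrs, cost, scale):
    """Greedy region merging: always merge the cheapest adjacent pair.

    `objs` maps region id -> list of pixels, `nbrs` maps region id -> set of
    adjacent ids, `cost(a, b)` returns the merging cost F of formula (7),
    and `scale` is the threshold at which merging stops.
    """
    while True:
        pairs = [(a, b) for a in objs for b in nbrs[a] if a < b]
        best = min(pairs, key=lambda p: cost(*p), default=None)
        if best is None or cost(*best) > scale:
            break                               # cheapest merge is too costly
        a, b = best
        objs[a].extend(objs.pop(b))             # fuse b into a
        nbrs[a] = (nbrs[a] | nbrs.pop(b)) - {a, b}
        for n in nbrs[a]:                       # re-point b's neighbours at a
            nbrs[n].discard(b)
            nbrs[n].add(a)
    return objs
```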
  • The object-oriented multi-scale segmentation method for high-resolution remote sensing images provided by the embodiment of the present application is verified below.
  • Three sets of data are used in the experiments.
  • Experiment 1 selected five different textures from the Brodatz standard texture library and composited them into a texture image, as shown in Figure 2(a).
  • Experiment 2 selected two textures typical of remote sensing images, built-up area and woodland, and composited them into a texture image, as shown in Figure 2(b).
  • The remote sensing texture regions were taken from an IKONOS satellite remote sensing image with a spatial resolution of 1 meter.
  • Experiment 3 selected part of a ZY-3 satellite remote sensing image of the Wuhan area with a spatial resolution of 2.1 meters, as shown in Figure 2(c).
  • For comparison, the multiresolution segmentation algorithm in the eCognition Developer 8.8 software (hereinafter, the eCognition software) was selected, and the three sets of experimental results were compared and analyzed.
  • Figure 3 shows the segmentation results for the Brodatz texture composite image.
  • Figures 3(a) and 3(b) are the segmentation results of the eCognition software at different scales, each with its own scale and weight-factor settings.
  • Figure 3(c) is the segmentation result of the algorithm proposed in this embodiment with its scale and weight-factor settings. As can be seen from Figure 3, the proposed algorithm separates the five different texture types into five independent and complete regions.
  • The segmentation results of the eCognition software failed to produce complete texture regions at any of the scales, with serious mis-segmentation at the boundaries of the texture regions.
  • Figure 4 shows the segmentation results for the remote sensing texture composite image.
  • Figures 4(a) and 4(b) are the segmentation results of the eCognition software at different scales, each with its own scale and weight-factor settings.
  • Figure 4(c) is the segmentation result of the algorithm proposed in this embodiment with its scale and weight-factor settings. As can be seen from Figure 4, the proposed algorithm distinguishes well between the two remote sensing textures of built-up area and woodland, and divides the image into four independent and complete regions.
  • The segmentation results of the eCognition software failed to produce complete texture regions at any of the scales, with obvious mis-segmentation at the boundaries of the texture regions.
  • Figure 5 shows the segmentation results for the remote sensing texture composite image.
  • Figures 5(a) and 5(b) are the segmentation results of the eCognition software at different scales, each with its own scale and weight-factor settings.
  • Figure 5(c) is the segmentation result of the algorithm proposed in this embodiment with its scale and weight-factor settings. As can be seen from Figure 5, the proposed algorithm distinguishes well between the two remote sensing textures of built-up area and woodland, and divides the image into four independent and complete regions.
  • The segmentation results of the eCognition software failed to produce complete texture regions at any of the scales, with obvious mis-segmentation at the boundaries of the texture regions.
  • The evaluation of image segmentation methods is a very important part of the process.
  • Existing segmentation evaluation methods can be divided into two types: unsupervised evaluation and supervised evaluation.
  • Unsupervised evaluation generally assesses a segmentation result by computing the homogeneity within objects and the difference between objects, on the assumption that the higher the within-object homogeneity and the greater the between-object difference, the better the segmentation.
  • This kind of evaluation is not suitable for images rich in texture information, because rich texture lowers the spectral homogeneity within objects and reduces the spectral difference between objects in the segmentation result, so such an index is no longer objective.
  • In contrast to unsupervised evaluation, which ignores the characteristics of the image itself, supervised evaluation first segments the image manually according to prior knowledge to obtain a reference segmentation, and then judges the quality of a segmentation result by measuring its agreement with the reference; it is comparatively more objective.
  • the "pixel number error" is used to evaluate the segmentation results of very high spatial resolution satellite remote sensing images.
  • the algorithm regards the process of image segmentation as a pixel-level classification process for images, and evaluates the segmentation accuracy through two indicators: Mis-segment Ratio (MR) and Regions Ratio (RR).
  • Mis-segmentation rate is the ratio of mis-segmented pixels in the image.
  • the segmentation result and the reference result are superimposed, and for each area in the segmentation result, the area with the largest overlapping area in the reference result is considered to be the correct segmentation result of the area (that is, it is considered that the pixels in the segmentation area should be classified as the reference category. ), and pixels that are not in the correct segmented area are mis-segmented pixels.
  • Fig. 6 is a schematic diagram of wrong segmentation, wherein Fig. 6(a) is the reference segmentation result, Fig. 6(b) is the algorithm segmentation result, then Fig. 6(c) is the algorithm error segmentation diagram, wherein the black area is the wrong segmentation part.
  • mis-segmentation rate The lower the mis-segmentation rate, the higher the overall accuracy of image segmentation, and vice versa.
  • the formula for calculating the mis-segmentation rate is as follows:
  • The regions ratio is used to assess whether the segmentation exhibits over-segmentation or under-segmentation.
  • The regions ratio is the ratio of the number of regions in the segmentation result to the number of regions in the reference result.
  • A regions ratio greater than 1 indicates over-segmentation;
  • a regions ratio less than 1 indicates under-segmentation; the closer the ratio is to 1, the better the segmentation result.
  • The regions ratio is calculated as follows:
  • RR = N_S / N_R
  • where N_S is the number of regions in the segmentation result and N_R is the number of regions in the reference result.
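  • The two indicators can be sketched as follows for integer label images, where each segment's "correct" class is the reference region with the largest overlap (the function names are assumptions):

```python
import numpy as np

def mis_segmentation_ratio(seg: np.ndarray, ref: np.ndarray) -> float:
    """MR: fraction of pixels lying outside each segment's majority reference region."""
    wrong = 0
    for s in np.unique(seg):
        mask = seg == s
        _, counts = np.unique(ref[mask], return_counts=True)
        wrong += mask.sum() - counts.max()  # pixels outside the majority class
    return wrong / seg.size

def regions_ratio(seg: np.ndarray, ref: np.ndarray) -> float:
    """RR = N_S / N_R; >1 means over-segmentation, <1 under-segmentation."""
    return np.unique(seg).size / np.unique(ref).size
```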
  • Supervised evaluation is adopted here: based on the pixel number error, the accuracy of the segmentation results is evaluated with the two indicators of mis-segmentation ratio and regions ratio.
  • According to prior knowledge, the three groups of experimental data were segmented manually.
  • In Fig. 7, column (a) shows the reference segmentations of the three groups of experimental data,
  • columns (b) and (c) show the mis-segmentation maps of the eCognition software's results on the three groups of data,
  • and column (d) shows the mis-segmentation maps of the results of the algorithm proposed in this embodiment, with the white parts being mis-segmented pixels.
  • Table 1 evaluates the segmentation results of the three groups of experiments. As can be seen from Table 1, in all three groups the mis-segmentation ratios of the proposed algorithm are lower than those obtained with the eCognition software.
  • In the first and second experiments, the proposed algorithm segmented the composite images into the same number of regions as the reference, giving a regions ratio of 1.
  • The eCognition software's results are more fragmented, with relatively high regions ratios and over-segmentation.
  • In the third experiment, the proposed algorithm divides the built-up area of the image into two regions, while the eCognition software splits it into many regions, so the regions ratio of its results is higher than that of the proposed algorithm.
  • The eCognition software shows varying degrees of mis-segmentation at the boundaries of the image texture regions, and in the latter two experiments it over-segments the more complex built-up texture regions at every scale.
  • In summary, because regions with rich texture information show distinctive spectral heterogeneity, and the traditional object-oriented multi-scale segmentation algorithm does not fully consider the texture information of ground objects, over-segmentation occurs when images with rich texture information are segmented.
  • The object-oriented multi-scale segmentation algorithm combining texture features proposed in this embodiment takes the spectral and shape features of the image into account while adding texture features to the merging criterion; it segments texture-rich images better, and its segmentation accuracy is superior to that of the traditional algorithm.
  • The purpose of the first set of experiments is to verify the completeness of the ground objects segmented by the algorithm of this embodiment and the positioning accuracy of their edges.
  • A satellite remote sensing image of the town of Milton Keynes in the United Kingdom, acquired in 2005, was selected, as shown in Fig. 8(a).
  • The area covered by the image is a corner of a plant maze, containing mainly grass and roads.
  • The image was manually segmented in this embodiment, extracting the 8 roads and the 10 patches of grass separated by the roads, 18 regions in total.
  • The reference segmentation is shown in Figure 8(b).
  • The multiresolution segmentation algorithm of the eCognition software was again used as the comparative experiment.
  • FIG. 9 shows the segmentation results for the image of Experiment 1.
  • Figures 9(a) and 9(b) are the segmentation results of the eCognition software at different scales, each with its own scale and weight-factor settings.
  • Fig. 9(c) is the segmentation result of the algorithm proposed in this embodiment with its scale and weight-factor settings.
  • Figures 9(d) to 9(f) are the mis-segmentation maps corresponding to the results of the eCognition software and of the proposed algorithm, respectively.
  • The algorithm proposed in this embodiment segments the 8 roads and 10 grass patches in the image into independent and complete objects.
  • When the shape weight factor is large, the eCognition software segments the roads and grass in the image into several objects and fails to extract the complete ground objects, as shown in Figure 9(a).
  • When the shape weight factor is small, the algorithm's constraint on the shape of the ground objects weakens, so the road objects in the segmentation result show "burrs" at various positions, as shown in Figure 9(b).
  • The algorithm proposed in this embodiment makes full use of the edge features of the ground objects in the remote sensing image, so the edges of the ground objects are located more accurately.
  • Based on the pixel number error, the accuracy of the segmentation results is evaluated with the two indicators of mis-segmentation ratio and regions ratio.
  • Table 2 gives the segmentation accuracy of the different algorithms for Experiment 1. As can be seen from the table, the regions ratios of the eCognition software's results are greater than 1 at every scale, showing varying degrees of over-segmentation. At one scale setting, the regions ratio of the eCognition result is closer to 1, but for lack of shape constraints the edges of the ground objects are located inaccurately, and the mis-segmentation ratio is slightly higher than at the other scale setting.
  • The algorithm proposed in this embodiment segments the regions of like ground objects into independent and complete objects, with a regions ratio equal to 1 and a mis-segmentation ratio lower than that of the eCognition software at any scale.
  • The purpose of the second set of experiments is to verify the boundary positioning accuracy of the proposed algorithm for regions of the image with rich texture information.
  • According to the uncertainty principle, no subband can achieve optimal resolution in both the frequency domain and the spatial domain: the higher the frequency-domain resolution, the lower the corresponding spatial-domain resolution, and vice versa. Therefore, methods based on time-frequency analysis often cannot precisely locate the boundaries of texture regions when describing image texture features. This calls for the introduction of other features of the ground objects, such as edge features.
  • FIG. 10(a) is the segmentation result for the Brodatz texture, and FIG. 10(b) is the result of the object-oriented segmentation algorithm based on the edge merging cost criterion proposed in this embodiment.
  • Figures 10(c) and 10(d) are the corresponding mis-segmentation maps.
  • These experiments demonstrate the effectiveness of the object-oriented multi-scale segmentation algorithm based on the edge merging cost criterion.
  • The algorithm uses the spectral, shape and texture features of the remote sensing image while also taking the edge features of the image into account.
  • Its segmentation results locate the boundaries of ground objects more accurately, and its segmentation accuracy is better than that of the traditional algorithm.
  • An object-oriented multi-scale segmentation system for high-resolution remote sensing images includes: a remote sensing image acquisition module 1, a feature extraction module 2, a heterogeneity image calculation module 3, an edge intensity map generation module 4, and a segmentation feature map generation module 5.
  • The remote sensing image acquisition module 1 is used to acquire remote sensing images;
  • the feature extraction module 2 is used to extract the spectral features, shape features, texture features and edge features of the remote sensing images;
  • the heterogeneity image calculation module 3 is used to calculate the heterogeneity of the spectral features, shape features and texture features respectively, and to segment the image according to the heterogeneity to obtain a heterogeneity image;
  • the edge intensity map generation module 4 is used to calculate the edge intensity of the remote sensing image using the edge features and to generate an edge intensity map;
  • the segmentation feature map generation module 5 is used to merge the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.
  • The segmentation feature map generation module 5 includes: a heterogeneity growth value calculation unit, an overall heterogeneity growth value calculation unit, an object merging cost calculation unit and a merging unit; the heterogeneity growth value calculation unit is used to calculate the growth values of the spectral heterogeneity, shape heterogeneity and texture heterogeneity; the overall heterogeneity growth value calculation unit is used to calculate the overall heterogeneity growth value from the growth values of the spectral, shape and texture heterogeneity;
  • the object merging cost calculation unit is used to calculate the object merging cost from the edge intensity and the overall heterogeneity growth value; the merging unit is used to merge the heterogeneity image and the edge intensity map when the object merging cost is minimal.
  • The heterogeneity image calculation module 3 includes: a subband acquisition unit and a texture feature calculation unit; the subband acquisition unit is used to decompose the remote sensing image using the non-subsampled contourlet transform to obtain subbands at multiple scales and in multiple directions; the texture feature calculation unit is used to calculate the local energy of each scale and direction subband using the local texture energy function, the local energy being the texture feature of the remote sensing image.
  • The texture feature is calculated as in formulas (1)-(3) above, where (2n+1)×(2n+1) is the window size, E_{s,d}(x,y) is the texture feature E_{s,d} at coordinates (x,y) in the image, f_{s,d} denotes the subband, s is the scale, and d is the direction.
  • The heterogeneity growth value calculation unit includes: a texture region acquisition subunit, a texture heterogeneity acquisition subunit, and a heterogeneity growth value calculation subunit; the texture region acquisition subunit is used to acquire the texture regions of the different texture feature types in the remote sensing image; the texture heterogeneity acquisition subunit is used to calculate the texture heterogeneity of the remote sensing image for each texture feature type; the heterogeneity growth value calculation subunit is used to calculate the heterogeneity growth value from each texture heterogeneity.
  • The texture heterogeneity for each texture feature type is calculated as in formula (4), where s denotes the scale, d the direction, ω_{s,d} the weight factor of the texture feature at scale s and direction d with 0 ≤ ω_{s,d} ≤ 1, and σ_{s,d} the standard deviation of the texture features of the image object at scale s and direction d; the heterogeneity growth value is calculated as in formula (5), where n_Obj1, n_Obj2 and n_Merge are the numbers of pixels of object Obj1, object Obj2 and the merged object, and σ_{s,d}^{Obj1}, σ_{s,d}^{Obj2} and σ_{s,d}^{Merge} are the standard deviations of the texture features of object Obj1, object Obj2 and the merged object at scale s and direction d, respectively.
  • The overall heterogeneity growth value calculation unit includes: a factor acquisition subunit and a growth value calculation subunit; the factor acquisition subunit is used to obtain the spectral weight factor, shape weight factor and texture weight factor; the growth value calculation subunit is used to calculate the overall heterogeneity growth value using the spectral, shape and texture weight factors together with the growth values of the spectral, shape and texture heterogeneity.
  • The object merging cost calculation unit includes: an edge merging cost calculation subunit and a merging subunit; the edge merging cost calculation subunit is used to calculate the edge merging cost from the edge intensity, and the merging subunit is used to calculate the object merging cost using the edge merging cost and the heterogeneity growth value.
  • The edge merging cost is calculated as EdgeCost(Obj1, Obj2) = Σ_{(x,y)∈Common} EdgeIntensity(x,y), where EdgeIntensity(x,y) is the edge intensity of the image at point (x,y) and Common is the set of mutually adjacent points of object Obj1 and object Obj2.
  • The edge intensity map generation module 4 includes: a differentiation unit, a convolution unit, a preliminary edge intensity map generation unit, and an edge refinement unit; the differentiation unit is used to apply a first-order or second-order differential to the remote sensing image to obtain its gradient; the convolution unit is used to convolve the remote sensing image with a Gaussian filter to obtain a smoothed image; the preliminary edge intensity map generation unit is used to calculate the edge intensity and normal vector of the remote sensing image from the gradient and the smoothed image to obtain a preliminary edge intensity map; the edge refinement unit is used to refine the preliminary edge intensity map using non-maximum suppression to obtain the edge intensity map of the remote sensing image.
  • The electronic device includes: a memory 601, a processor 602, and a computer program stored in the memory 601 and executable on the processor 602; when the processor 602 executes the computer program, the object-oriented high-resolution remote sensing image multi-scale segmentation method described above is implemented.
  • The electronic device further includes: at least one input device 603 and at least one output device 604.
  • The memory 601, processor 602, input device 603 and output device 604 are connected through a bus 605.
  • The input device 603 may specifically be a camera, a touch panel, a physical button, a mouse, or the like.
  • The output device 604 may specifically be a display screen.
  • The memory 601 may be a high-speed random access memory (RAM), or a non-volatile memory such as a disk memory.
  • The memory 601 is used to store a set of executable program codes, and the processor 602 is coupled to the memory 601.
  • An embodiment of the present application further provides a computer-readable storage medium, which may be provided in the electronic device of any of the foregoing embodiments; the computer-readable storage medium may be the aforementioned memory 601.
  • A computer program is stored on the computer-readable storage medium, and when the program is executed by the processor 602 it implements the object-oriented multi-scale segmentation method for high-resolution remote sensing images described in the foregoing embodiments.
  • The computer-readable storage medium may also be a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, an optical disk, or any other medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Disclosed is an object-oriented multi-scale segmentation method for high-resolution remote sensing images, comprising: acquiring a remote sensing image; extracting the spectral features, shape features, texture features and edge features of the remote sensing image; calculating the heterogeneity of the spectral features, shape features and texture features respectively, and segmenting the image according to the heterogeneity to obtain a heterogeneity image; calculating the edge intensity of the remote sensing image using the edge features, and generating an edge intensity map; and merging the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map. By segmenting the image on the scales of spectral, shape, texture and edge features, the segmented objects are determined, so that an object hierarchy network can be constructed in subsequent processing; therefore, when the method is applied to a visual attention model, the objects and their hierarchical structure can be determined.

Description

Object-oriented multi-scale segmentation method and system for high-resolution remote sensing images

Technical Field

The present invention relates to the technical field of image processing, and in particular to an object-oriented multi-scale segmentation method and system for high-resolution remote sensing images.

Background

Remote sensing image processing refers to the series of operations performed on remote sensing images to achieve a desired purpose, such as radiometric correction, geometric correction, image enhancement, projection transformation, mosaicking, feature extraction, classification, image analysis, and various thematic processing.

In remote sensing image analysis, object-oriented multi-scale segmentation is a key technique: it is the premise and core of object-oriented high-resolution remote sensing image analysis, and a good segmentation result lays a solid foundation for subsequent, deeper analysis such as land-cover classification, target recognition, and information extraction.

In existing object-based visual attention models, objects and their hierarchies are usually given manually before visual attention begins, which sidesteps the problem of defining objects. There is therefore the open problem of how to determine objects and their hierarchies within object-based visual attention models.

Summary of the Invention

The main purpose of the present invention is to provide an object-oriented multi-scale segmentation method and system for high-resolution remote sensing images, aiming to solve the prior-art problem of how objects and their hierarchical structure are to be determined in object-based visual attention models.

To achieve the above purpose, a first aspect of the present invention provides an object-oriented multi-scale segmentation method for high-resolution remote sensing images, including: acquiring a remote sensing image; extracting the spectral features, shape features, texture features and edge features of the remote sensing image; calculating the heterogeneity of the spectral features, shape features and texture features respectively, and segmenting the image according to the heterogeneity to obtain a heterogeneity image; calculating the edge intensity of the remote sensing image using the edge features, and generating an edge intensity map; and merging the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.

A second aspect of the present application provides an object-oriented multi-scale segmentation system for high-resolution remote sensing images, comprising: a remote sensing image acquisition module for acquiring remote sensing images; a feature extraction module for extracting the spectral features, shape features, texture features and edge features of the remote sensing images; a heterogeneity image calculation module for calculating the heterogeneity of the spectral features, shape features and texture features respectively, and segmenting the image according to the heterogeneity to obtain a heterogeneity image; an edge intensity map generation module for calculating the edge intensity of the remote sensing image using the edge features and generating an edge intensity map; and a segmentation feature map generation module for merging the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.

A third aspect of the present application provides an electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the object-oriented high-resolution remote sensing image multi-scale segmentation method described in any one of the above is implemented.

A fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the object-oriented multi-scale segmentation method for high-resolution remote sensing images described in any one of the above.

The object-oriented multi-scale segmentation method for high-resolution remote sensing images provided by the present invention has the beneficial effect that, by segmenting the image on the scales of spectral, shape, texture and edge features, the segmented objects are determined, so that an object hierarchy network can be constructed in subsequent processing; thus, when applied to a visual attention model, the objects and their hierarchical structure can be determined.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of the object-oriented multi-scale segmentation method for high-resolution remote sensing images according to an embodiment of the present application;
Fig. 2 shows the sample images used to verify the method according to an embodiment of the present application;
Fig. 3 shows the segmentation results of a Brodatz texture composite image used to verify the method according to an embodiment of the present application;
Fig. 4 shows the segmentation results of a Brodatz texture composite image used to verify the method according to an embodiment of the present application;
Fig. 5 shows the segmentation results of a remote sensing texture composite image used to verify the method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of mis-segmentation for verifying the method according to an embodiment of the present application;
Fig. 7 is a schematic diagram of the manual segmentation used to verify the method according to an embodiment of the present application;
Fig. 8 is a schematic diagram of the manual segmentation of another image used to verify the method according to an embodiment of the present application;
Fig. 9 shows segmentation results at different scales, together with the corresponding mis-segmentation maps, for verifying the method according to an embodiment of the present application;
Fig. 10 shows the segmentation experiment on the third image used to verify the method according to an embodiment of the present application;
Fig. 11 is a schematic structural block diagram of the object-oriented multi-scale segmentation system for high-resolution remote sensing images according to an embodiment of the present application;
Fig. 12 is a schematic structural block diagram of an electronic device according to an embodiment of the present application.
Detailed Description of the Embodiments

In order to make the purpose, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.

Referring to Fig. 1, this embodiment provides an object-oriented multi-scale segmentation method for high-resolution remote sensing images, including: S1, acquiring a remote sensing image; S2, extracting the spectral features, shape features, texture features and edge features of the remote sensing image; S3, calculating the heterogeneity of the spectral features, shape features and texture features respectively, and segmenting the image according to the heterogeneity to obtain a heterogeneity image; S4, calculating the edge intensity of the remote sensing image using the edge features, and generating an edge intensity map; S5, merging the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.

By segmenting the image on the scales of spectral, shape, texture and edge features, the segmented objects are determined, so that an object hierarchy network can be constructed in subsequent processing; therefore, when applied to a visual attention model, the objects and their hierarchies can be determined.

In one embodiment, merging the heterogeneity image and the edge intensity map includes: calculating the growth values of the spectral heterogeneity, shape heterogeneity and texture heterogeneity; calculating the overall heterogeneity growth value from these growth values; calculating the object merging cost from the edge intensity and the overall heterogeneity growth value; and merging the heterogeneity image and the edge intensity map when the object merging cost is minimal.
In one embodiment, extracting the texture features of the remote sensing image includes: decomposing the remote sensing image using the non-subsampled contourlet transform (NSCT) to obtain subbands at multiple scales and in multiple directions; and calculating the local energy of each scale and direction subband using a local texture energy function, the local energy being the texture feature of the remote sensing image.

The texture feature is calculated as:

E_{s,d}(x,y) = Σ_{i=−n}^{n} Σ_{j=−n}^{n} w(i,j)·|f_{s,d}(x+i, y+j)| (1)

w(i,j) = g(i,j) / Σ_{i=−n}^{n} Σ_{j=−n}^{n} g(i,j) (2)

g(i,j) = exp(−(i²+j²)/(2σ²)) (3)

where (2n+1)×(2n+1) is the window size, E_{s,d}(x,y) is the texture feature E_{s,d} at coordinates (x,y) in the image, f_{s,d} denotes the subband, s denotes the scale, and d denotes the direction.
In the above formulas, the present invention considers that, within a local window, the contributions of the NSCT coefficients to the texture feature at the window center are not uniform: the farther a coefficient is from the window center, the smaller its contribution, following a Gaussian distribution. Since the NSCT filters are non-subsampled, this method yields a texture feature matrix of the same size as the original image, i.e., every pixel in the image has a corresponding texture feature, which provides the basis for constructing texture heterogeneity.

In this embodiment, the interior of an image object is heterogeneous not only in its spectral characteristics; when an image object contains two different texture regions, the object is also heterogeneous in its texture characteristics. Existing object-oriented multi-scale segmentation methods cannot make full use of the texture information of ground objects in remote sensing images to distinguish different texture regions while also taking the spectral and shape characteristics of the image into account. Based on the texture feature description method proposed in S1, the present invention proposes an object-oriented multi-scale segmentation algorithm for remote sensing images combining texture features. The algorithm introduces the concept of "texture heterogeneity", comprehensively considers the spectral, shape and texture characteristics of remote sensing images, improves the heterogeneity criterion of the fractal net evolution algorithm, and uses the improved merging criterion to perform object-oriented multi-scale segmentation of the image.
In one embodiment, calculating the growth value of the texture heterogeneity includes: acquiring the texture regions of the different texture feature types in the remote sensing image; calculating the texture heterogeneity of the remote sensing image for each texture feature type; and calculating the heterogeneity growth value from each texture heterogeneity. The texture heterogeneity for each texture feature type is calculated as

h_texture = Σ_s Σ_d ω_{s,d}·σ_{s,d} (4)

where s denotes the scale, d denotes the direction, ω_{s,d} is the weight factor of the texture feature at scale s and direction d with 0 ≤ ω_{s,d} ≤ 1, and σ_{s,d} is the standard deviation of the texture features of the image object at scale s and direction d.

The heterogeneity growth value is calculated as:

Δh_texture = Σ_s Σ_d ω_{s,d}·[ n_Merge·σ_{s,d}^{Merge} − ( n_Obj1·σ_{s,d}^{Obj1} + n_Obj2·σ_{s,d}^{Obj2} ) ] (5)

where n_Obj1, n_Obj2 and n_Merge are the numbers of pixels of object Obj1, object Obj2 and the merged object respectively, and σ_{s,d}^{Obj1}, σ_{s,d}^{Obj2} and σ_{s,d}^{Merge} are the standard deviations of the texture features of object Obj1, object Obj2 and the merged object at scale s and direction d, respectively.
In one embodiment, the object merging cost is calculated as follows. The edge merging cost is calculated from the edge intensity as

EdgeCost(Obj1, Obj2) = Σ_{(x,y)∈Common} EdgeIntensity(x,y) (6)

where EdgeIntensity(x,y) is the edge intensity of the image at point (x,y), and Common is the set of mutually adjacent points of object Obj1 and object Obj2. The object merging cost is then calculated from the edge merging cost and the heterogeneity growth value as:

F = f + ω_edge·EdgeCost(Obj1, Obj2) (7)

where f is the overall heterogeneity growth value.
In one embodiment, the edge intensity map is generated as follows: apply a first-order or second-order differential to the remote sensing image to obtain its gradient; convolve the remote sensing image with a Gaussian filter to obtain a smoothed image; calculate the edge intensity and normal vector of the remote sensing image from the gradient and the smoothed image to obtain a preliminary edge intensity map; and refine the preliminary edge intensity map using non-maximum suppression to obtain the edge intensity map of the remote sensing image.

In an image, there are usually obvious boundaries between regions of different classes, with discontinuous gray-level changes at the boundaries, which constitute the edges in the image. Edge-based segmentation methods exploit the discontinuity of gray levels between different regions and segment the image by finding the boundaries between regions. In object-oriented image segmentation, once the edge of an image object is determined, the shape of the object is determined as well.

In edge-based segmentation, edge detection is a crucial step. The purpose of an edge detection algorithm is to detect the pixels (edge points) at which the gray level exhibits a step-like or roof-like change. Such algorithms generally exploit the first-order or second-order derivative behavior of the gray-level change at edge points, and extract edge points by applying a first-order or second-order differential to the image.
The first-order differential of the image gives the gradient of the image. The gradient of the image at point (x,y) is defined as:

∇f(x,y) = [ g_x, g_y ]ᵀ = [ ∂f/∂x, ∂f/∂y ]ᵀ (8)

Its magnitude M and direction θ are respectively

M(x,y) = √( g_x² + g_y² ) (9)

θ(x,y) = arctan( g_y / g_x ) (10)

The Gaussian filter adopted by the Canny algorithm,

G(x,y) = exp( −(x²+y²)/(2σ²) ),

is then convolved with the image f(x,y) to obtain the smoothed image f_s(x,y):

f_s(x,y) = G(x,y) * f(x,y) (11)

According to formulas (8) and (9), the edge intensity M and the normal vector θ computed by the Canny algorithm at point (x,y) of the image are:

M(x,y) = √( (∂f_s/∂x)² + (∂f_s/∂y)² ) (12)

θ(x,y) = arctan( (∂f_s/∂y) / (∂f_s/∂x) ) (13)
Non-maximum suppression is used to thin the edge intensity map, and the thinned result is taken as the edge intensity map of the image.
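As an illustration of the thinning step, a minimal non-maximum suppression sketch in Python follows; the 4-direction quantisation of the edge normal and the function name are assumptions:

```python
import numpy as np

def non_max_suppression(m: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Keep only pixels that are local maxima of M along the edge normal."""
    out = np.zeros_like(m)
    angle = (np.rad2deg(theta) + 180.0) % 180.0   # fold normals into [0, 180)
    for y in range(1, m.shape[0] - 1):
        for x in range(1, m.shape[1] - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:            # horizontal normal
                p, q = m[y, x - 1], m[y, x + 1]
            elif a < 67.5:                        # 45-degree normal
                p, q = m[y - 1, x + 1], m[y + 1, x - 1]
            elif a < 112.5:                       # vertical normal
                p, q = m[y - 1, x], m[y + 1, x]
            else:                                 # 135-degree normal
                p, q = m[y - 1, x - 1], m[y + 1, x + 1]
            if m[y, x] >= p and m[y, x] >= q:
                out[y, x] = m[y, x]               # retain the local maximum
    return out
```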
In the process of merging image objects, there is not only the growth of the heterogeneity of spectral, texture and other features inside the objects caused by the merge; if there is a discontinuous gray-level change (i.e., an edge) at the common boundary of two adjacent objects, a corresponding "edge merging cost" is also incurred by the merge.
The edge merging cost when object Obj1 and object Obj2 are merged is defined as in formula (6):

EdgeCost(Obj1, Obj2) = Σ_{(x,y)∈Common} EdgeIntensity(x,y)

where EdgeIntensity(x,y) is the edge intensity of the image at point (x,y), and Common is the set of mutually adjacent points of object Obj1 and object Obj2, which may be called the adjacent edge.
The merging cost F produced by merging image object Obj1 and image object Obj2 into image object Obj_Merge is defined as the weighted sum of the growth value of the object heterogeneity and the edge merging cost of the objects, i.e., formula (7).

In the bottom-up region merging, the criterion of "minimum object merging cost" is adopted as the region merging strategy, and the edge merging cost is combined with the heterogeneity criterion to form a new merging criterion, yielding an object-oriented multi-scale segmentation algorithm for remote sensing images based on the edge merging cost criterion. The algorithm not only comprehensively considers the spectral, shape, texture and other characteristics of image objects, but also makes full use of the edge features of the ground objects in the image, so the edges of ground objects are located more accurately.
The object-oriented multi-scale segmentation method for high-resolution remote sensing images provided by the embodiments of the present application is verified below. In this embodiment, three sets of data are used in the experiments. Experiment 1 selected five different textures from the Brodatz standard texture library and composited them into a texture image, as shown in Fig. 2(a). Experiment 2 selected two textures typical of remote sensing images, built-up area and woodland, and composited them into a texture image, as shown in Fig. 2(b); the remote sensing texture regions were taken from an IKONOS satellite remote sensing image with a spatial resolution of 1 meter. Experiment 3 selected part of a ZY-3 satellite remote sensing image of the Wuhan area with a spatial resolution of 2.1 meters, as shown in Fig. 2(c).

To analyze the performance of the proposed algorithm and of the fractal net evolution algorithm when segmenting images with rich texture information, the multiresolution segmentation algorithm in the eCognition Developer 8.8 software (hereinafter, the eCognition software) was selected as the comparative experiment, and the three sets of experimental results were compared and analyzed.
Fig. 3 shows the segmentation results for the Brodatz texture composite image. Figs. 3(a) and 3(b) are the segmentation results of the eCognition software at different scales, each with its own scale and weight-factor settings; Fig. 3(c) is the segmentation result of the proposed algorithm with its scale and weight-factor settings. As can be seen from Fig. 3, the proposed algorithm separates the five texture types into five independent and complete regions, whereas the eCognition software failed to produce complete texture regions at any scale, with serious mis-segmentation at the boundaries of the texture regions.

Fig. 4 shows the segmentation results for the remote sensing texture composite image. Figs. 4(a) and 4(b) are the segmentation results of the eCognition software at different scales, each with its own scale and weight-factor settings; Fig. 4(c) is the segmentation result of the proposed algorithm with its scale and weight-factor settings. As can be seen from Fig. 4, the proposed algorithm distinguishes well between the two remote sensing textures of built-up area and woodland and divides the image into four independent and complete regions, whereas the eCognition software failed to produce complete texture regions at any scale, again with obvious mis-segmentation at the boundaries of the texture regions.

Fig. 5 shows the segmentation results for the remote sensing texture composite image. Figs. 5(a) and 5(b) are the segmentation results of the eCognition software at different scales, each with its own scale and weight-factor settings; Fig. 5(c) is the segmentation result of the proposed algorithm with its scale and weight-factor settings. As can be seen from Fig. 5, the proposed algorithm distinguishes well between the two remote sensing textures of built-up area and woodland and divides the image into four independent and complete regions, whereas the eCognition software failed to produce complete texture regions at any scale, again with obvious mis-segmentation at the boundaries of the texture regions.
The evaluation of image segmentation methods is a very important part of the process. Existing segmentation evaluation methods can be divided into unsupervised evaluation and supervised evaluation. Unsupervised evaluation generally assesses a segmentation result by computing the homogeneity within objects and the difference between objects, on the assumption that the higher the within-object homogeneity and the greater the between-object difference, the better the segmentation. However, this kind of evaluation is not suitable for images rich in texture information, because rich texture lowers the spectral homogeneity within objects and reduces the spectral difference between objects, so such an index is no longer objective.

In contrast to unsupervised evaluation, which ignores the characteristics of the image itself, supervised evaluation first segments the image manually according to prior knowledge to obtain a reference segmentation, and then judges the quality of a segmentation result by measuring its agreement with the reference; it is comparatively more objective. The "pixel number error" is used to evaluate the segmentation results of very high spatial resolution satellite remote sensing images. This approach regards image segmentation as a pixel-level classification of the image, and evaluates segmentation accuracy through two indicators: the mis-segmentation ratio (MR) and the regions ratio (RR).
The mis-segmentation ratio is the proportion of mis-segmented pixels in the image. The segmentation result is overlaid on the reference result; for each region in the segmentation result, the reference region with the largest overlapping area is taken as its correct segmentation (that is, the pixels of the segmented region are deemed to belong to that reference class), and pixels lying outside the correct region are counted as mis-segmented. Fig. 6 is a schematic diagram of mis-segmentation: Fig. 6(a) is the reference segmentation, Fig. 6(b) is the algorithm's segmentation, and Fig. 6(c) is the resulting mis-segmentation map, in which the black areas are the mis-segmented parts.

The lower the mis-segmentation ratio, the higher the overall accuracy of the image segmentation, and vice versa. The mis-segmentation ratio is calculated as follows:

MR = ( Σ_{j=1}^{N} Σ_{i≠j} n_{ij} ) / ( Σ_{j=1}^{N} Σ_{i} n_{ij} ) × 100%

where n_{ij} is the total number of pixels of reference class j that are assigned to class i, and N is the number of regions (classes) in the reference result.
At the same time, the regions ratio is used to assess whether the segmentation exhibits over-segmentation or under-segmentation. The regions ratio is the ratio of the number of regions in the segmentation result to the number of regions in the reference result. A regions ratio greater than 1 indicates over-segmentation; a regions ratio less than 1 indicates under-segmentation; the closer the ratio is to 1, the better the segmentation result. The regions ratio is calculated as follows:

RR = N_S / N_R

where N_S is the number of regions in the segmentation result and N_R is the number of regions in the reference result.
This embodiment adopts supervised evaluation: based on the pixel number error, the accuracy of the segmentation results is evaluated with the two indicators of mis-segmentation ratio and regions ratio. According to prior knowledge, the three groups of experimental data were segmented manually. In Fig. 7, column (a) shows the reference segmentations of the three groups of experimental data, columns (b) and (c) show the mis-segmentation maps of the eCognition software's results, and column (d) shows the mis-segmentation maps of the results of the proposed algorithm, with the white parts being mis-segmented pixels.

Table 1 evaluates the segmentation results of the three groups of experiments. As can be seen from Table 1, in all three groups the mis-segmentation ratios of the proposed algorithm are lower than those obtained with the eCognition software. In the first and second experiments, the proposed algorithm segmented the composite images into the same number of regions as the reference, giving a regions ratio of 1, whereas the eCognition software's results are more fragmented, with relatively high regions ratios and over-segmentation. In the third experiment, the proposed algorithm divides the built-up area of the image into two regions, while the eCognition software splits it into many regions, so the regions ratio of its results is higher than that of the proposed algorithm.
Table 1. Accuracy evaluation of the three groups of experimental results.
It is worth mentioning that if the texture inside an image object is stationary and regular, then although the spectral heterogeneity of the object is high (its spectral standard deviation is large), it follows from the formulas above that the growth in spectral heterogeneity when two objects with homogeneous texture are merged is very small. This is precisely why the eCognition software can, at larger scales, separately segment the left and upper texture regions in Experiment 1 and the woodland texture region in Experiment 2. In fact, in statistics-based texture description methods, statistics of a texture region such as its gray-level histogram, gray-level mean and standard deviation can all serve as texture feature descriptors. The standard deviation characterizes how drastically the spectrum varies and can thus describe the strength of a texture. However, complex textures that behave differently at different scales or in different directions cannot be distinguished by the standard deviation alone. Hence, in all three experiments the eCognition software shows varying degrees of mis-segmentation at the boundaries of the image texture regions, and in the latter two experiments it over-segments the more complex built-up texture regions at every scale.

In summary, because regions with rich texture information exhibit distinctive spectral heterogeneity, and the traditional object-oriented multi-scale segmentation algorithm does not fully consider the texture information of the ground objects, over-segmentation occurs when images with rich texture information are segmented. The object-oriented multi-scale segmentation algorithm combining texture features proposed in this embodiment takes the spectral and shape features of the image into account while adding texture features to the merging criterion; it segments texture-rich images better, and its segmentation accuracy is superior to that of the traditional algorithm.
To verify the effectiveness of the proposed algorithm, this embodiment uses two sets of experiments for comparative analysis.

The purpose of the first set of experiments is to verify the completeness of the ground objects segmented by the algorithm of this embodiment and the positioning accuracy of their edges. A satellite remote sensing image of the town of Milton Keynes in the United Kingdom, acquired in 2005, was selected, as shown in Fig. 8(a). The area covered by the image is a corner of a plant maze, containing mainly two classes of ground objects: grass and roads. Through visual interpretation, the image was manually segmented, extracting the 8 roads and the 10 patches of grass separated by the roads, 18 regions in total; the reference segmentation is shown in Fig. 8(b). The multiresolution segmentation algorithm of the eCognition software was again used as the comparative experiment.
Fig. 9 shows the segmentation results for the image of Experiment 1. Figs. 9(a) and 9(b) are the segmentation results of the eCognition software at different scales, each with its own scale and weight-factor settings; Fig. 9(c) is the segmentation result of the proposed algorithm with its scale and weight-factor settings. Figs. 9(d) to 9(f) are the mis-segmentation maps corresponding to the results of the eCognition software and of the proposed algorithm, respectively.

As can be seen from Fig. 9, the proposed algorithm segments the 8 roads and 10 grass patches in the image into independent and complete objects. For the eCognition software, when the shape weight factor is large, the software segments the roads and grass into several objects and fails to extract the complete ground objects, as shown in Fig. 9(a); when the shape weight factor is small, the constraint on the shape of the ground objects weakens, so the road objects in the segmentation result show "burrs" at various positions, as shown in Fig. 9(b). The proposed algorithm makes full use of the edge features of the ground objects in the remote sensing image and locates their edges more accurately.

Based on the pixel number error, this embodiment evaluates the accuracy of the segmentation results with the two indicators of mis-segmentation ratio and regions ratio. Table 2 gives the segmentation accuracy of the different algorithms for the image of Experiment 1. As can be seen from the table, the regions ratios of the eCognition software's results are greater than 1 at every scale, showing varying degrees of over-segmentation. At one scale setting, the regions ratio of the eCognition result is closer to 1, but for lack of shape constraints the edges of the ground objects are located inaccurately, and the mis-segmentation ratio is slightly higher than at the other scale setting. The proposed algorithm segments the regions of like ground objects into independent and complete objects, with a regions ratio equal to 1 and a mis-segmentation ratio lower than that of the eCognition software at any scale.
Table 2. Accuracy evaluation of the segmentation results of Experiment 1.
The purpose of the second set of experiments is to verify the boundary positioning accuracy of the proposed algorithm for regions of the image with rich texture information.

The Heisenberg uncertainty principle in time-frequency analysis can be stated as follows (Li Shixiong, 1997). Let the window function g(t) ∈ L²(−∞,+∞) satisfy

t·g(t) ∈ L²(−∞,+∞),

ω·ĝ(ω) ∈ L²(−∞,+∞),

where ĝ(ω) is the Fourier transform of g(t). Then

Δ_g · Δ_ĝ ≥ 1/2,

where

Δ_g = (1/‖g‖₂)·[ ∫ (t − t₀)²·|g(t)|² dt ]^{1/2}

is called the width of the window function, and

t₀ = (1/‖g‖₂²)·∫ t·|g(t)|² dt

is called the center of the window function.
According to the uncertainty principle, when an image is transformed into the time-frequency domain and its subbands are analyzed, no subband can achieve optimal resolution in both the frequency domain and the spatial domain: the higher the frequency-domain resolution, the lower the corresponding spatial-domain resolution, and vice versa. Therefore, methods based on time-frequency analysis often cannot precisely locate the boundaries of texture regions when describing image texture features. This calls for the introduction of other features of the ground objects, such as edge features.

For this experiment, a ZY-3 satellite remote sensing image of the Wuhan area was selected. Fig. 10(a) is the segmentation result for the Brodatz texture, and Fig. 10(b) is the result of the object-oriented segmentation algorithm based on the edge merging cost criterion proposed in this embodiment. Figs. 10(c) and 10(d) are the corresponding mis-segmentation maps.

As can be seen from the figures, after incorporating the edge features, the algorithm clearly improves the boundary positioning accuracy of the texture regions: it finds the upper and lower boundaries of the built-up area more accurately and segments the built-up area into a single independent and complete region. Table 3 gives the segmentation accuracy of the two algorithms. As can be seen from the table, the mis-segmentation ratio of the algorithm based on the edge merging cost criterion is lower than that of the object-oriented segmentation algorithm combining texture features, and its regions ratio is equal to 1.
Table 3. Accuracy evaluation of the segmentation results of Experiment 2.
Through these two sets of experiments, this embodiment demonstrates the effectiveness of the object-oriented multi-scale segmentation algorithm based on the edge merging cost criterion. While using the spectral, shape and texture features of the remote sensing image, the algorithm also takes the edge features of the image into account; its segmentation results locate the boundaries of ground objects more accurately, and its segmentation accuracy is better than that of the traditional algorithm.
Referring to Fig. 11, an embodiment of the present application provides an object-oriented multi-scale segmentation system for high-resolution remote sensing images, including: a remote sensing image acquisition module 1, a feature extraction module 2, a heterogeneity image calculation module 3, an edge intensity map generation module 4, and a segmentation feature map generation module 5. The remote sensing image acquisition module 1 is used to acquire remote sensing images; the feature extraction module 2 is used to extract the spectral features, shape features, texture features and edge features of the remote sensing images; the heterogeneity image calculation module 3 is used to calculate the heterogeneity of the spectral features, shape features and texture features respectively, and to segment the image according to the heterogeneity to obtain a heterogeneity image; the edge intensity map generation module 4 is used to calculate the edge intensity of the remote sensing image using the edge features and to generate an edge intensity map; the segmentation feature map generation module 5 is used to merge the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.
In one embodiment, the segmentation feature map generation module 5 includes: a heterogeneity growth value calculation unit, an overall heterogeneity growth value calculation unit, an object merging cost calculation unit and a merging unit. The heterogeneity growth value calculation unit is used to calculate the growth values of the spectral heterogeneity, shape heterogeneity and texture heterogeneity; the overall heterogeneity growth value calculation unit is used to calculate the overall heterogeneity growth value from these growth values; the object merging cost calculation unit is used to calculate the object merging cost from the edge intensity and the overall heterogeneity growth value; and the merging unit is used to merge the heterogeneity image and the edge intensity map when the object merging cost is minimal.

In one embodiment, the heterogeneity image calculation module 3 includes: a subband acquisition unit and a texture feature calculation unit. The subband acquisition unit is used to decompose the remote sensing image using the non-subsampled contourlet transform to obtain subbands at multiple scales and in multiple directions; the texture feature calculation unit is used to calculate the local energy of each scale and direction subband using the local texture energy function, the local energy being the texture feature of the remote sensing image.
The texture feature is calculated as in formulas (1)-(3) above, where (2n+1)×(2n+1) is the window size, E_{s,d}(x,y) is the texture feature E_{s,d} at coordinates (x,y) in the image, f_{s,d} denotes the subband, s denotes the scale, and d denotes the direction.
In one embodiment, the heterogeneity growth value calculation unit includes: a texture region acquisition subunit, a texture heterogeneity acquisition subunit, and a heterogeneity growth value calculation subunit. The texture region acquisition subunit is used to acquire the texture regions of the different texture feature types in the remote sensing image; the texture heterogeneity acquisition subunit is used to calculate the texture heterogeneity of the remote sensing image for each texture feature type; the heterogeneity growth value calculation subunit is used to calculate the heterogeneity growth value from each texture heterogeneity.

The texture heterogeneity for each texture feature type is calculated as in formula (4), where s denotes the scale, d the direction, ω_{s,d} the weight factor of the texture feature at scale s and direction d with 0 ≤ ω_{s,d} ≤ 1, and σ_{s,d} the standard deviation of the texture features of the image object at scale s and direction d. The heterogeneity growth value is calculated as in formula (5), where n_Obj1, n_Obj2 and n_Merge are the numbers of pixels of object Obj1, object Obj2 and the merged object, and σ_{s,d}^{Obj1}, σ_{s,d}^{Obj2} and σ_{s,d}^{Merge} are the standard deviations of the texture features of object Obj1, object Obj2 and the merged object at scale s and direction d, respectively.
In one embodiment, the overall heterogeneity growth value calculation unit includes: a factor acquisition subunit and a growth value calculation subunit. The factor acquisition subunit is used to obtain the spectral weight factor, shape weight factor and texture weight factor; the growth value calculation subunit is used to calculate the overall heterogeneity growth value using the spectral, shape and texture weight factors together with the growth values of the spectral, shape and texture heterogeneity.

The overall heterogeneity growth value is calculated as f = ω_color·Δh_color + ω_shape·Δh_shape + ω_texture·Δh_texture, where ω_color, ω_shape and ω_texture are the spectral, shape and texture weight factors respectively, satisfying 0 ≤ ω_color ≤ 1, 0 ≤ ω_shape ≤ 1 and ω_color + ω_shape + ω_texture = 1, and Δh_color, Δh_shape and Δh_texture are the growth values of the spectral heterogeneity, shape heterogeneity and texture heterogeneity respectively.
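As a small illustration of the weighted combination, a sketch in Python; the weight values are assumptions, and only the constraint ω_color + ω_shape + ω_texture = 1 comes from the text:

```python
def overall_heterogeneity_growth(dh_color, dh_shape, dh_texture,
                                 w_color=0.5, w_shape=0.2, w_texture=0.3):
    """f = w_color*dh_color + w_shape*dh_shape + w_texture*dh_texture."""
    assert abs(w_color + w_shape + w_texture - 1.0) < 1e-9  # weights sum to 1
    return w_color * dh_color + w_shape * dh_shape + w_texture * dh_texture
```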
In one embodiment, the object merging cost calculation unit includes: an edge merging cost calculation subunit and a merging subunit. The edge merging cost calculation subunit is used to calculate the edge merging cost from the edge intensity; the merging subunit is used to calculate the object merging cost using the edge merging cost and the heterogeneity growth value.

The edge merging cost is calculated as EdgeCost(Obj1, Obj2) = Σ_{(x,y)∈Common} EdgeIntensity(x,y), where EdgeIntensity(x,y) is the edge intensity of the image at point (x,y) and Common is the set of mutually adjacent points of object Obj1 and object Obj2.

The object merging cost is calculated as F = f + ω_edge·EdgeCost(Obj1, Obj2), where f is the overall heterogeneity growth value.
In one embodiment, the edge intensity map generation module 4 includes: a differentiation unit, a convolution unit, a preliminary edge intensity map generation unit, and an edge refinement unit. The differentiation unit is used to apply a first-order or second-order differential to the remote sensing image to obtain its gradient; the convolution unit is used to convolve the remote sensing image with a Gaussian filter to obtain a smoothed image; the preliminary edge intensity map generation unit is used to calculate the edge intensity and normal vector of the remote sensing image from the gradient and the smoothed image to obtain a preliminary edge intensity map; the edge refinement unit is used to refine the preliminary edge intensity map using non-maximum suppression to obtain the edge intensity map of the remote sensing image.
An embodiment of the present application provides an electronic device. Referring to Fig. 12, the electronic device includes: a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602; when the processor 602 executes the computer program, the object-oriented multi-scale segmentation method for high-resolution remote sensing images described above is implemented.

Further, the electronic device also includes: at least one input device 603 and at least one output device 604. The memory 601, processor 602, input device 603 and output device 604 are connected through a bus 605.

The input device 603 may specifically be a camera, a touch panel, a physical button, a mouse, or the like. The output device 604 may specifically be a display screen.

The memory 601 may be a high-speed random access memory (RAM), or a non-volatile memory such as a disk memory. The memory 601 is used to store a set of executable program codes, and the processor 602 is coupled to the memory 601.
Further, an embodiment of the present application also provides a computer-readable storage medium, which may be provided in the electronic device of any of the foregoing embodiments; the computer-readable storage medium may be the aforementioned memory 601. A computer program is stored on the computer-readable storage medium, and when the program is executed by the processor 602 it implements the object-oriented multi-scale segmentation method for high-resolution remote sensing images described in the foregoing embodiments.

Further, the computer-readable storage medium may also be a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, an optical disk, or any other medium that can store program code.

In the above embodiments, the descriptions of the individual embodiments each have their own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of the other embodiments.

The above is a description of the object-oriented multi-scale segmentation method and system for high-resolution remote sensing images provided by the present invention. Those skilled in the art may make changes to the specific implementation and scope of application in accordance with the ideas of the embodiments of the present invention; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. An object-oriented multi-scale segmentation method for high-resolution remote sensing images, characterized by comprising:
    acquiring a remote sensing image;
    extracting the spectral features, shape features, texture features and edge features of the remote sensing image;
    calculating the heterogeneity of the spectral features, shape features and texture features respectively, and segmenting the image according to the heterogeneity to obtain a heterogeneity image;
    calculating the edge intensity of the remote sensing image using the edge features, and generating an edge intensity map;
    merging the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.
  2. The object-oriented multi-scale segmentation method for high-resolution remote sensing images according to claim 1, characterized in that
    merging the heterogeneity image and the edge intensity map comprises:
    calculating the growth values of the spectral heterogeneity, shape heterogeneity and texture heterogeneity;
    calculating the overall heterogeneity growth value from the growth values of the spectral heterogeneity, shape heterogeneity and texture heterogeneity;
    calculating the object merging cost from the edge intensity and the overall heterogeneity growth value;
    merging the heterogeneity image and the edge intensity map when the object merging cost is minimal.
  3. The object-oriented multi-scale segmentation method for high-resolution remote sensing images according to claim 2, characterized in that
    extracting the texture features of the remote sensing image comprises:
    decomposing the remote sensing image using the non-subsampled contourlet transform to obtain subbands at multiple scales and in multiple directions;
    calculating the local energy of each scale and direction subband using a local texture energy function, the local energy being the texture feature of the remote sensing image, the texture feature being calculated as:
    E_{s,d}(x,y) = Σ_{i=−n}^{n} Σ_{j=−n}^{n} w(i,j)·|f_{s,d}(x+i, y+j)|
    w(i,j) = g(i,j) / Σ_{i=−n}^{n} Σ_{j=−n}^{n} g(i,j)
    g(i,j) = exp(−(i²+j²)/(2σ²))
    where (2n+1)×(2n+1) is the window size, E_{s,d}(x,y) is the texture feature E_{s,d} at coordinates (x,y) in the image, f_{s,d} denotes the subband, s denotes the scale, and d denotes the direction.
  4. The object-oriented multi-scale segmentation method for high-resolution remote sensing images according to claim 3, characterized in that
    calculating the growth value of the texture heterogeneity comprises:
    acquiring the texture regions of the different texture feature types in the remote sensing image;
    calculating the texture heterogeneity of the remote sensing image for each texture feature type;
    calculating the heterogeneity growth value from each texture heterogeneity;
    the texture heterogeneity for each texture feature type being calculated as
    h_texture = Σ_s Σ_d ω_{s,d}·σ_{s,d}
    where s denotes the scale, d denotes the direction, ω_{s,d} is the weight factor of the texture feature at scale s and direction d with 0 ≤ ω_{s,d} ≤ 1, and σ_{s,d} is the standard deviation of the texture features of the image object at scale s and direction d;
    the heterogeneity growth value being calculated as:
    Δh_texture = Σ_s Σ_d ω_{s,d}·[ n_Merge·σ_{s,d}^{Merge} − ( n_Obj1·σ_{s,d}^{Obj1} + n_Obj2·σ_{s,d}^{Obj2} ) ]
    where n_Obj1, n_Obj2 and n_Merge are the numbers of pixels of object Obj1, object Obj2 and the merged object respectively, and σ_{s,d}^{Obj1}, σ_{s,d}^{Obj2} and σ_{s,d}^{Merge} are the standard deviations of the texture features of object Obj1, object Obj2 and the merged object at scale s and direction d, respectively.
  5. The object-oriented multi-scale segmentation method for high-resolution remote sensing images according to claim 4, characterized in that
    the overall heterogeneity growth value is calculated as follows:
    the overall heterogeneity growth value is calculated using the spectral weight factor, shape weight factor and texture weight factor together with the growth values of the spectral heterogeneity, shape heterogeneity and texture heterogeneity, as f = ω_color·Δh_color + ω_shape·Δh_shape + ω_texture·Δh_texture, where ω_color, ω_shape and ω_texture are the spectral, shape and texture weight factors respectively, satisfying 0 ≤ ω_color ≤ 1, 0 ≤ ω_shape ≤ 1 and ω_color + ω_shape + ω_texture = 1, and Δh_color, Δh_shape and Δh_texture are the growth values of the spectral heterogeneity, shape heterogeneity and texture heterogeneity respectively.
  6. The object-oriented multi-scale segmentation method for high-resolution remote sensing images according to claim 5, characterized in that
    the object merging cost is calculated as follows:
    the edge merging cost is calculated from the edge intensity as EdgeCost(Obj1, Obj2) = Σ_{(x,y)∈Common} EdgeIntensity(x,y), where EdgeIntensity(x,y) is the edge intensity of the image at point (x,y) and Common is the set of mutually adjacent points of object Obj1 and object Obj2;
    the object merging cost is calculated using the edge merging cost and the heterogeneity growth value as F = f + ω_edge·EdgeCost(Obj1, Obj2), where f is the overall heterogeneity growth value.
  7. The object-oriented multi-scale segmentation method for high-resolution remote sensing images according to claim 1, characterized in that
    the edge intensity map is generated as follows:
    applying a first-order or second-order differential to the remote sensing image to obtain the gradient of the remote sensing image;
    convolving the remote sensing image with a Gaussian filter to obtain a smoothed image;
    calculating the edge intensity and normal vector of the remote sensing image from the gradient and the smoothed image to obtain a preliminary edge intensity map;
    refining the preliminary edge intensity map using non-maximum suppression to obtain the edge intensity map of the remote sensing image.
  8. An object-oriented multi-scale segmentation system for high-resolution remote sensing images, characterized by comprising:
    a remote sensing image acquisition module for acquiring remote sensing images;
    a feature extraction module for extracting the spectral features, shape features, texture features and edge features of the remote sensing images;
    a heterogeneity image calculation module for calculating the heterogeneity of the spectral features, shape features and texture features respectively, and segmenting the image according to the heterogeneity to obtain a heterogeneity image;
    an edge intensity map generation module for calculating the edge intensity of the remote sensing image using the edge features and generating an edge intensity map;
    a segmentation feature map generation module for merging the heterogeneity image and the edge intensity map to obtain a remote sensing image segmentation feature map.
  9. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that, when the processor executes the computer program, the method according to any one of claims 1 to 7 is implemented.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the method according to any one of claims 1 to 7 is implemented.
PCT/CN2020/141206 2020-12-30 2020-12-30 Object-oriented multi-scale segmentation method and system for high-resolution remote sensing images WO2022141145A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/141206 WO2022141145A1 (zh) 2020-12-30 2020-12-30 Object-oriented multi-scale segmentation method and system for high-resolution remote sensing images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/141206 WO2022141145A1 (zh) 2020-12-30 2020-12-30 Object-oriented multi-scale segmentation method and system for high-resolution remote sensing images

Publications (1)

Publication Number Publication Date
WO2022141145A1 true WO2022141145A1 (zh) 2022-07-07

Family

ID=82258784

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141206 WO2022141145A1 (zh) 2020-12-30 2020-12-30 Object-oriented multi-scale segmentation method and system for high-resolution remote sensing images

Country Status (1)

Country Link
WO (1) WO2022141145A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115909050A (zh) * 2022-10-26 2023-04-04 中国电子科技集团公司第五十四研究所 Remote sensing image airport extraction method combining line segment direction and morphological difference
CN116030352A (zh) * 2023-03-29 2023-04-28 山东锋士信息技术有限公司 Long-time-series land use classification method fusing multi-scale segmentation and superpixel segmentation
CN116052001A (zh) * 2023-02-10 2023-05-02 中国矿业大学(北京) Method for optimal scale selection based on the class variance ratio method
CN116188497A (zh) * 2023-04-27 2023-05-30 成都国星宇航科技股份有限公司 DSM optimization method, apparatus, device and storage medium for stereo remote sensing image pairs
CN117745688A (zh) * 2023-12-25 2024-03-22 中国科学院空天信息创新研究院 Multi-scale SAR image change detection visualization system, electronic device and storage medium
CN117876711A (zh) * 2024-03-12 2024-04-12 金锐同创(北京)科技股份有限公司 Image target detection method, apparatus, device and medium based on image processing
CN118097474A (zh) * 2024-04-22 2024-05-28 嘉兴明绘信息科技有限公司 Ground object information collection and recognition system based on image analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060050984A1 (en) * 2004-05-11 2006-03-09 National Aeronautics And Space Administration As Representing The United States Government Split-remerge method for eliminating processing window artifacts in recursive hierarchical segmentation
CN104751478A (zh) * 2015-04-20 2015-07-01 武汉大学 Object-oriented building change detection method based on multi-feature fusion
CN105894513A (zh) * 2016-04-01 2016-08-24 武汉大学 Remote sensing image change detection method and system considering spatiotemporal changes of image objects
CN107085708A (zh) * 2017-04-20 2017-08-22 哈尔滨工业大学 High-resolution remote sensing image change detection method based on multi-scale segmentation and fusion
CN109993753A (zh) * 2019-03-15 2019-07-09 北京大学 Method and device for segmenting urban functional areas in remote sensing images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060050984A1 (en) * 2004-05-11 2006-03-09 National Aeronautics And Space Administration As Representing The United States Government Split-remerge method for eliminating processing window artifacts in recursive hierarchical segmentation
CN104751478A (zh) * 2015-04-20 2015-07-01 武汉大学 Object-oriented building change detection method based on multi-feature fusion
CN105894513A (zh) * 2016-04-01 2016-08-24 武汉大学 Remote sensing image change detection method and system considering spatiotemporal changes of image objects
CN107085708A (zh) * 2017-04-20 2017-08-22 哈尔滨工业大学 High-resolution remote sensing image change detection method based on multi-scale segmentation and fusion
CN109993753A (zh) * 2019-03-15 2019-07-09 北京大学 Method and device for segmenting urban functional areas in remote sensing images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHEN XIAOLE: "Object-oriented Building Extraction from High-resolution Remote Sensing Images Based on Visual Attention Mechanism", CHINESE DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, UNIVERSITY OF CHINESE ACADEMY OF SCIENCES, CN, no. 1, 15 January 2017 (2017-01-15), CN , XP055948541, ISSN: 1674-022X *
TIAN YINGJIE: "Change Detection of Buildings in Urban Area with High-resolution Remote Sensing Images Based on Multi-scale Object", CHINESE DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, UNIVERSITY OF CHINESE ACADEMY OF SCIENCES, CN, no. 6, 15 June 2020 (2020-06-15), CN , XP055948537, ISSN: 1674-022X *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115909050A (zh) * 2022-10-26 2023-04-04 中国电子科技集团公司第五十四研究所 Remote sensing image airport extraction method combining line segment direction and morphological difference
CN115909050B (zh) * 2022-10-26 2023-06-23 中国电子科技集团公司第五十四研究所 Remote sensing image airport extraction method combining line segment direction and morphological difference
CN116052001A (zh) * 2023-02-10 2023-05-02 中国矿业大学(北京) Method for optimal scale selection based on the class variance ratio method
CN116052001B (zh) * 2023-02-10 2023-11-17 中国矿业大学(北京) Method for optimal scale selection based on the class variance ratio method
CN116030352A (zh) * 2023-03-29 2023-04-28 山东锋士信息技术有限公司 Long-time-series land use classification method fusing multi-scale segmentation and superpixel segmentation
CN116188497A (zh) * 2023-04-27 2023-05-30 成都国星宇航科技股份有限公司 DSM optimization method, apparatus, device and storage medium for stereo remote sensing image pairs
CN116188497B (zh) * 2023-04-27 2023-07-07 成都国星宇航科技股份有限公司 DSM optimization method, apparatus, device and storage medium for stereo remote sensing image pairs
CN117745688A (zh) * 2023-12-25 2024-03-22 中国科学院空天信息创新研究院 Multi-scale SAR image change detection visualization system, electronic device and storage medium
CN117876711A (zh) * 2024-03-12 2024-04-12 金锐同创(北京)科技股份有限公司 Image target detection method, apparatus, device and medium based on image processing
CN118097474A (zh) * 2024-04-22 2024-05-28 嘉兴明绘信息科技有限公司 Ground object information collection and recognition system based on image analysis

Similar Documents

Publication Publication Date Title
WO2022141145A1 (zh) Object-oriented multi-scale segmentation method and system for high-resolution remote sensing images
CN109409292B (zh) Heterogeneous image matching method based on refined feature optimization and extraction
Yin et al. Hot region selection based on selective search and modified fuzzy C-means in remote sensing images
Zhang et al. A fusion algorithm for infrared and visible images based on saliency analysis and non-subsampled Shearlet transform
CN107067405B (zh) Remote sensing image segmentation method based on optimal scale selection
JP2017521779A (ja) Detection of nuclear edges using image analysis
CN101520894A (zh) Salient object extraction method based on regional saliency
CN103279957A (zh) Method for extracting regions of interest from remote sensing images based on multi-scale feature fusion
CN113609984A (zh) Pointer-type meter reading recognition method and device, and electronic equipment
CN114241372A (zh) Target recognition method applied to fan-scan image stitching
CN110070545A (zh) Method for automatically extracting urban built-up areas based on urban texture feature density
CN115690086A (zh) Object-based high-resolution remote sensing image change detection method and system
Fengping et al. Road extraction using modified dark channel prior and neighborhood FCM in foggy aerial images
Han et al. Segmenting images with complex textures by using hybrid algorithm
CN112164087B (zh) Superpixel segmentation method and device based on edge constraints and segmentation boundary search
CN115620169B (zh) Building principal angle correction method based on regional consistency
Zhang et al. Region-of-interest extraction based on spectrum saliency analysis and coherence-enhancing diffusion model in remote sensing images
CN108304766B (zh) Method for screening hazardous-goods storage yards using high-resolution remote sensing
Yao et al. A multi-expose fusion image dehazing based on scene depth information
CN115861792A (zh) Multimodal remote sensing image matching method with weighted phase orientation description
CN114862883A (zh) Target edge extraction method, image segmentation method and system
CN115511928A (zh) Matching method for multispectral images
CN115170978A (zh) Vehicle target detection method and apparatus, electronic device and storage medium
Deng et al. A coarse to fine framework for recognizing and locating multiple diatoms with highly complex backgrounds in forensic investigation
Liu et al. Remote sensing image fusion algorithm based on mutual-structure for joint filtering using saliency detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20967492

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20967492

Country of ref document: EP

Kind code of ref document: A1