CN113920420A - Building extraction method and device, terminal equipment and readable storage medium


Info

Publication number: CN113920420A
Application number: CN202010644505.7A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 史文中, 陈善雄
Current and original assignee: Shenzhen Research Institute HKPU
Legal status: Pending

Application filed by Shenzhen Research Institute HKPU; priority to CN202010644505.7A.

Classifications

    • G06T7/11 Region-based segmentation (G Physics; G06 Computing; G06T Image data processing or generation, in general; G06T7/00 Image analysis; G06T7/10 Segmentation; edge detection)
    • G06T7/12 Edge-based segmentation
    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06T2207/10032 Satellite or aerial image; remote sensing (G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/10 Image acquisition modality)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the technical field of image processing and provides a building extraction method comprising the following steps: acquiring target image data (including first data and high-resolution image data); preprocessing the target image data; performing image processing on the preprocessed target image data to obtain an initial candidate area and a self-adaptive segmentation result; fusing the initial candidate area and the self-adaptive segmentation result to obtain a building candidate area; and finally optimizing the building candidate area according to the initial candidate area to obtain a building extraction result. The method fuses the first data and the high-resolution image data to obtain the identification result of the building: the first data provides elevation information relative to the ground and is not easily affected by environmental factors, while the high-resolution image data provides rich spectral features and texture information, so the robustness of the method, the accuracy of the identification result, and the stability of that accuracy are all improved.

Description

Building extraction method and device, terminal equipment and readable storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a building extraction method, a building extraction device, a terminal device and a readable storage medium.
Background
In mapping, urban planning, urban modeling, and urban disaster emergency response, identifying and extracting urban buildings from image data is an essential problem to be solved.
Existing urban building extraction methods mainly fall into three categories: building extraction methods based on two-dimensional image data; building extraction methods based on three-dimensional data; and data-fusion building extraction methods based on both two-dimensional and three-dimensional data.
Building extraction methods based on two-dimensional image data rely on the spectral characteristics of high-resolution image data. They are susceptible to spectral ambiguity and shadow occlusion, which readily causes serious errors in the extraction results, and their level of automation is low.
Building extraction methods based on three-dimensional data extract buildings from the intensity, echo, and geometric attributes of three-dimensional point cloud data. Their level of automation is higher than that of methods based on two-dimensional image data, but the accuracy of their extraction results is unstable.
For the above reasons, data-fusion building extraction methods based on two-dimensional and three-dimensional data have attracted much attention.
However, existing data-fusion building extraction methods based on two-dimensional and three-dimensional data are limited in many ways. Building identification methods based on low-level or middle-level features depend on specific thresholds or empirical rules, are easily affected by the quality of the sample data, and have low accuracy and poor generality.
Object-oriented building extraction methods need to segment the image data; the segmentation process is easily affected by illumination, noise, and other environmental factors, and the segmentation result depends on the segmentation parameter settings, so the extraction result is unstable.
Deep learning methods based on high-level semantic features need a large amount of well-labeled sample data for feature learning, lack extensibility, and have generalization ability limited to the training domain.
Disclosure of Invention
The embodiments of the present application provide a building extraction method, a building extraction device, a terminal device, and a readable storage medium, which can address problems of existing urban building extraction methods such as unstable extraction results, dependence on training sample data, and low precision.
In a first aspect, an embodiment of the present application provides a building extraction method, including:
acquiring target image data; wherein the target image data comprises first data and high-resolution image data;
preprocessing the target image data to obtain preprocessed target image data;
performing image processing on the preprocessed target image data to obtain an initial candidate region and a self-adaptive segmentation result;
performing fusion processing on the initial candidate region and the self-adaptive segmentation result to obtain a building candidate region;
and optimizing the building candidate area according to the initial candidate area to obtain a building extraction result.
In a second aspect, an embodiment of the present application provides a building extraction device, including:
the acquisition module is used for acquiring target image data; wherein the target image data comprises first data and high-resolution image data;
the preprocessing module is used for preprocessing the target image data to obtain preprocessed target image data;
the image processing module is used for carrying out image processing on the preprocessed target image data to obtain an initial candidate region and a self-adaptive segmentation result;
the fusion processing module is used for carrying out fusion processing on the initial candidate area and the self-adaptive segmentation result to obtain a building candidate area;
and the optimization module is used for optimizing the building candidate area according to the initial candidate area to obtain a building extraction result.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the building extraction method according to any one of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the method for building extraction according to any one of the first aspect is implemented.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the building extraction method according to any one of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
According to the method and the device, the first data and the high-resolution image data are fused to obtain the identification result of the building. The first data provides the elevation of above-ground objects relative to the ground and is not easily affected by environmental factors, while the high-resolution image data provides rich spectral features and texture information. The robustness of the method and the accuracy of the identification result are thereby improved, and the stability of that accuracy is ensured.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a building extraction method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of high-resolution image data and first data provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of an initial candidate region provided by an embodiment of the present application;
FIG. 4 is a schematic illustration of a super-contour map provided by an embodiment of the present application;
fig. 5 is a schematic view of an application scenario for obtaining a segmentation result of a first hyper-contour map according to an embodiment of the present application;
fig. 6 is a schematic view of an application scenario for determining a first target area according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the superposition of a high-resolution image and a high-resolution image data segmentation result according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an overlay of a high resolution image and a first data segmentation result according to an embodiment of the present application;
FIG. 9 is a schematic diagram of obtaining a first super-contour map segmentation result and a second super-contour map segmentation result according to another embodiment of the present application;
fig. 10 is a schematic view of an application scenario for obtaining a building extraction result according to an embodiment of the present application;
fig. 11 is a schematic diagram of an accuracy evaluation result of a building extraction method provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a building extraction device provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The building extraction method provided by the embodiment of the application can be applied to terminal equipment such as a mobile phone, a tablet computer, a notebook computer and a server, and the embodiment of the application does not limit the specific type of the terminal equipment.
Fig. 1 shows a schematic flow chart of the building extraction method provided by the present application, which can be applied to the above-mentioned server by way of example and not limitation.
S101, acquiring target image data; wherein the target image data includes first data and high resolution image data.
In a specific application, target image data about a target site is acquired, where the target image data includes, but is not limited to, first data and high-resolution image data, and the target site includes, but is not limited to, a city or any site where building extraction is required. The first data refers to relative elevation data, which includes, but is not limited to, LiDAR point cloud data, digital surface models (DSM), stereo-pair data, and normalized digital surface models (nDSM). Relative elevation data can provide the elevation of above-ground objects relative to the ground, and high-resolution image data can provide rich spectral features and texture information.
In this embodiment, the first data may specifically be LiDAR (light detection and ranging) point cloud data. Correspondingly, LiDAR point cloud data about the target site can be acquired by radar equipment, and high-resolution image data I of the target site can be captured by a high-resolution camera.
As shown in fig. 2, a schematic diagram of high resolution image data and first data is provided;
fig. 2(a) is a grayscale image of an example laser radar point cloud rendering, fig. 2(b) is a grayscale image of a three-band (near-infrared/red/green) high-resolution image, and fig. 2(c) shows the preprocessed first data.
Specifically, fig. 2(b) contains historic buildings, roads, and trees of complex shapes, and is characterized by dense building coverage.
S102, preprocessing the target image data to obtain preprocessed target image data.
In a specific application, preprocessing corresponding to the type of the target image data is performed to remove (or reduce) environmental noise (such as sensor noise and ambient light) in the target image data and to reduce the influence of that noise on the accuracy of the target image data, thereby obtaining the preprocessed target image data.
In one embodiment, step S102 includes:
performing first preprocessing on the first data to obtain preprocessed first data; wherein the first preprocessing comprises generating a normalized digital surface model (nDSM) based on the first data;
performing second preprocessing on the high-resolution image data to obtain preprocessed high-resolution image data; the second preprocessing includes at least one of a denoising process, a registration process, and an orthorectification process.
In specific application, the first data is subjected to first preprocessing, so that the accuracy of classifying the first data can be improved; by performing the second pre-processing on the high resolution image data, noise can be reduced or removed while reducing the influence of reflection and refraction of ambient light on the image data. Wherein the first preprocessing includes, but is not limited to, generating a normalized digital surface model based on the first data; the second pre-processing includes, but is not limited to, at least one of a denoising process, a registration process, and an orthorectification process.
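To make the first preprocessing concrete, the following is a minimal sketch, assuming the common construction of an nDSM as the difference between a digital surface model (DSM) rasterized from the LiDAR returns and a digital terrain model (DTM) interpolated from ground points; the patent itself only states that an nDSM is generated from the first data, so the DSM/DTM inputs and the function name are illustrative assumptions.

```python
import numpy as np

def generate_ndsm(dsm: np.ndarray, dtm: np.ndarray) -> np.ndarray:
    """Normalized digital surface model: per-pixel height above ground.

    dsm: digital surface model rasterized from the LiDAR returns.
    dtm: digital terrain model interpolated from ground-classified points.
    Both grids are assumed co-registered and in the same units (meters).
    """
    ndsm = dsm - dtm
    return np.clip(ndsm, 0, None)  # negative residuals are interpolation noise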
S103, image processing is carried out on the preprocessed target image data, and an initial candidate region and a self-adaptive segmentation result are obtained.
In specific application, the preprocessed target image data is detected through a preset algorithm to obtain an information image, and a non-ground area map and a vegetation area map in the information image are superposed to obtain an area meeting a first preset condition as an initial candidate area; and respectively carrying out self-adaptive segmentation processing on the first super-contour map and the second super-contour map in the information image to obtain a corresponding first data segmentation result and a high-resolution image data segmentation result. The information image includes, but is not limited to, a first super-contour map, a non-ground area map, a second super-contour map, and a vegetation area map.
As shown in fig. 3, a schematic diagram of an initial candidate region is exemplarily shown;
fig. 3(a) shows a vegetation region map, fig. 3(b) shows a non-ground region map, and fig. 3(c) shows an initial candidate region map.
In one embodiment, step S103 includes:
S1031, detecting the preprocessed target image data to generate an information image, where the information image comprises a first super-contour map, a non-ground area map, a second super-contour map, and a vegetation area map;
S1032, superposing the non-ground area map and the vegetation area map to obtain a first superposed image;
S1033, acquiring a region meeting a first preset condition in the first superposed image as an initial candidate region;
S1034, performing self-adaptive segmentation on the first super-contour map and the second super-contour map to obtain corresponding self-adaptive segmentation results, where the self-adaptive segmentation results comprise a first data segmentation result and a high-resolution image data segmentation result.
In specific application, the preprocessed first data is detected and processed to obtain a first super-contour map nUCM and a non-ground region map (NGR) in an information image; and detecting the preprocessed high-resolution image data to obtain a second super-contour map iUCM and a vegetation region map (VR).
In specific application, the obtained non-ground area map and the vegetation area map are subjected to superposition processing to obtain a first superposed image, and an area meeting a first preset condition in the first superposed image is obtained and serves as an initial candidate area, wherein the first preset condition can be specifically set according to actual conditions and is used for identifying a non-ground and non-building pixel area in the first superposed image.
In practical application, the overlapping areas in the first superposed image are vegetation within the non-ground area, so the first preset condition can be set correspondingly to remove from the non-ground area the pixel areas that overlap in the first superposed image. Accordingly, the overlapping regions in the first superposed image are identified, the corresponding pixel regions are deleted from the non-ground area, and the remaining pixel regions of the non-ground area are retained as the Initial Candidate Region (ICR).
Since the non-ground area map is obtained by binarization, the binarization result generally satisfies the following: gray values in object areas above the conversion threshold are converted into a first gray value, and gray values in background areas below the conversion threshold are converted into a second gray value.
Therefore, deleting the pixel region corresponding to the overlapping region from the non-ground area may be understood as converting the gray value of that pixel region into the second gray value, making it part of the background area of the non-ground area map.
In this embodiment, the first gray value is set to 255 (white) and the second gray value to 0 (black); that is, the binarization converts gray values of object regions above the conversion threshold to 255 (white) and gray values of background regions below the conversion threshold to 0 (black).
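As a minimal sketch of this step, the following applies the gray-value convention above (object = 255, background = 0) to derive the initial candidate region by resetting vegetated non-ground pixels to the background value; the function name and array types are illustrative assumptions.

```python
import numpy as np

def initial_candidate_region(ngr: np.ndarray, vr: np.ndarray) -> np.ndarray:
    """ngr: non-ground area map; vr: vegetation area map.
    Both are binary images with object pixels = 255 and background = 0.
    Pixels that are both non-ground and vegetation are reset to the
    background value; the remaining non-ground pixels form the ICR."""
    icr = ngr.copy()
    icr[(ngr == 255) & (vr == 255)] = 0  # remove vegetated non-ground pixels
    return icr
```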
The first super-contour map and the second super-contour map are then each subjected to self-adaptive segmentation to obtain the corresponding self-adaptive segmentation results. When segmenting the first or second super-contour map, too large a threshold causes under-segmentation, making the objects in the segmentation result too large, while too small a threshold causes over-segmentation. Therefore, several segmentation thresholds need to be set according to the actual situation, the ultrametric contour map (UCM) is segmented hierarchically, and a multi-level self-adaptive segmentation result is generated correspondingly.
In one embodiment, adaptively segmenting the first hyper-contour map and the second hyper-contour map to obtain corresponding adaptive segmentation results, includes:
according to at least 2 preset segmentation thresholds, carrying out multilevel self-adaptive segmentation on the first super-contour map and the second super-contour map to obtain a corresponding first data segmentation result and a high-resolution image data segmentation result; wherein the first data segmentation result comprises at least two first segmentation results; the high resolution image data segmentation result comprises at least two second segmentation results.
In a specific application, in order to reduce the influence of the over-segmentation result or the under-segmentation result on the precision of the building extraction result, at least 2 segmentation threshold values are preset, and multi-level self-adaptive segmentation is performed on the first super-contour map or the second super-contour map according to the multiple segmentation threshold values, so that a first data segmentation result comprising at least two first segmentation results and a high-resolution image data segmentation result comprising at least two second segmentation results are correspondingly obtained.
For example, if four segmentation thresholds are set, the first super-contour map or the second super-contour map is subjected to multi-level adaptive segmentation according to 4 segmentation thresholds, the correspondingly obtained first data segmentation result includes 4 first segmentation results, and the high resolution image data segmentation result includes 4 second segmentation results.
For example, a plurality of corresponding segmentation thresholds may be set at a preset interval, and the first super-contour map subjected to multi-level adaptive segmentation. With the preset interval set to 0.1, the segmentation thresholds are 0.1, 0.2, 0.3, and 0.4; the first super-contour map is subjected to 4-level adaptive segmentation according to these 4 thresholds, and the adaptive segmentation result generated for the first data comprises nSeg1, nSeg2, nSeg3, and nSeg4, i.e., the process from over-segmentation to under-segmentation of the first super-contour map, where nSeg4 is the first segmentation result corresponding to the segmentation threshold 0.4, nSeg3 corresponds to 0.3, nSeg2 to 0.2, and nSeg1 to 0.1.
It should be noted that the values in a super-contour map range from 0 to 1, and the greater a contour's value, the greater the contrast across that contour. Since a building is an above-ground object, it contrasts strongly with the ground in the relative elevation data and therefore has large contour values. In the high-resolution image data, a building is generally brighter and contrasts strongly with adjacent ground objects, so it likewise has large contour values. That is, buildings exhibit large contour values on both super-contour maps. Meanwhile, experiments show that extraction results based on segmentation thresholds of 0.4 and above are basically consistent, so the maximum segmentation threshold can be set to 0.4. The weakest edges in the high-resolution image data have super-contour values of about 0.05, so to avoid omissions, 0.1 is set as the lower limit of the segmentation threshold. A preset interval of 0.1 can therefore be set, giving 4 segmentation thresholds. Experiments prove that with segmentation thresholds from 0.4 down to 0.1, the hierarchical overlay analysis basically yields a satisfactory building extraction result.
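The following sketch shows one common realization of this multi-level segmentation, under the assumption that the level-t segmentation is obtained by labelling the connected components of pixels whose contour strength falls below t; the patent does not spell out the segmentation operator, so this is an illustrative reading rather than its stated implementation.

```python
import numpy as np
from scipy import ndimage

def ucm_level_segmentations(ucm: np.ndarray, thresholds=(0.4, 0.3, 0.2, 0.1)):
    """Multi-level adaptive segmentation of a super-contour map (UCM).

    At each threshold t, pixels with contour strength below t form the
    regions, and labelling their connected components yields the level-t
    segmentation: a higher t merges regions (towards under-segmentation),
    a lower t splits them (towards over-segmentation)."""
    results = []
    for t in thresholds:
        labels, num_regions = ndimage.label(ucm < t)
        results.append(labels)  # label image for threshold t
    return results
```

The returned label images, ordered from threshold 0.4 down to 0.1, can be fed directly into the hierarchical overlay analysis sketched later.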
As shown in FIG. 4, a schematic of a super-contour map is provided;
FIG. 4A is the second super-contour map iUCM of the high-resolution image data; FIG. 4B is the first super-contour map nUCM of the first data.
And S104, carrying out fusion processing on the initial candidate region and the self-adaptive segmentation result to obtain a building candidate region.
In specific application, the initial candidate region and the self-adaptive segmentation result are subjected to data fusion-based hierarchical superposition analysis processing, so that a building candidate region can be obtained.
In one embodiment, step S104 includes:
performing hierarchical overlay analysis on the first data segmentation result and the initial candidate region to obtain a second overlay image; wherein the first data segmentation result comprises at least two first segmentation results;
identifying all first target areas in the initial candidate areas according to the second superposed image;
merging all the first target areas to serve as a first super-contour map segmentation result;
performing hierarchical overlay analysis on the high-resolution image data segmentation result and the initial candidate region to obtain a third overlay image; wherein the high resolution image data segmentation result comprises at least two second segmentation results;
identifying all second target areas in the initial candidate areas according to the third superposed image;
merging all the second target areas to serve as a second super-contour map segmentation result;
and fusing the first super-contour map segmentation result and the second super-contour map segmentation result to obtain the building candidate area.
In specific application, after the first hyper-contour map is subjected to self-adaptive segmentation according to a preset segmentation threshold, at least two first segmentation results are obtained, wherein the at least two first segmentation results comprise segmentation results from under-segmentation processing to over-segmentation processing of first data; and after the second super-contour map is subjected to self-adaptive segmentation according to a preset segmentation threshold value, obtaining at least two second segmentation results including segmentation results from under-segmentation processing to over-segmentation processing of the high-resolution image data.
In specific application, a second superposed image can be obtained by superposing and analyzing the first data segmentation result and the initial candidate region, all first target regions in the initial candidate region are identified according to the second superposed image, and all the first target regions are merged to obtain a first super-contour map segmentation result;
in specific application, the segmentation result of the high-resolution image data and the initial candidate region are subjected to superposition analysis to obtain a third superposed image, all second target regions in the initial candidate region are identified according to the third superposed image, and all second target regions are merged to obtain a second super-contour map segmentation result.
Taking any pixel connected region in the initial candidate region as a candidate object, and taking any pixel connected region in the first data segmentation result as a first segmentation object; taking a superposed region superposed by the candidate object and the corresponding first segmentation object in the second superposed image as a first superposed object, wherein the first target region is a pixel region corresponding to the first superposed object meeting a second preset condition in the initial candidate region; the second preset condition can be specifically set according to the actual situation.
In this embodiment, the second preset condition is that the area ratio corresponding to the first superimposition object is greater than a preset area ratio threshold. Correspondingly, the area ratio refers to a ratio of an area of the first superimposition object in the second superimposition image to an area of the first division object corresponding to the first superimposition object.
And taking any pixel connected region in the high-resolution image data segmentation result as a high-resolution image data segmentation object, taking a superimposed region in which the candidate object in the third superimposed image and the corresponding high-resolution image data segmentation object are superimposed as a second superimposed object, wherein the second target region is a pixel region corresponding to the second superimposed object which meets a second preset condition in the initial candidate region.
In this embodiment, the second preset condition is that the area ratio of the second superimposition object is greater than a preset area ratio threshold. Correspondingly, the area ratio is a ratio of the area of the second superimposition object in the third superimposition region to the area of the high-resolution image data segmentation object corresponding to the second superimposition object.
The preset area proportion threshold value can be specifically set according to actual conditions. For example, the preset area ratio threshold is set to 80%. Correspondingly, when the area ratio of the first superimposition object to the first segmentation object corresponding to the first superimposition object in the second superimposition image is 85%, it is determined that the first superimposition object satisfies the second preset condition, that is, the pixel region corresponding to the first superimposition object in the initial candidate region is the first target region.
In a specific application, obtaining a first super-contour map segmentation result includes:
Sorting the first segmentation results according to a preset sorting method, for example in descending order of segmentation threshold;
Step A101, superposing the first segmentation result meeting a third preset condition in the sequence with the initial candidate region to obtain a second superposed image;
Step A102, calculating the area of each first superposed object in the second superposed image;
Step A103, calculating the area of each first segmentation object in the first segmentation result;
Step A104, calculating, for each first superposed object in the second superposed image, the ratio of its area to the area of its corresponding first segmentation object, and comparing this area ratio with the preset area ratio threshold;
Step A105, when the area ratio of a first superposed object is greater than the preset area ratio threshold, judging that the first superposed object meets the second preset condition, i.e., that the pixel region corresponding to it in the initial candidate region is a first target region;
Step A106, deleting the first target region from the initial candidate region to form a new initial candidate region;
then superposing the new initial candidate region with the first segmentation result meeting a fourth preset condition in the sequence to obtain a new second superposed image, and returning to execute steps A101 to A106 until the area ratio of every first superposed object to its corresponding first segmentation object is smaller than the preset area ratio threshold; finally, merging all first target regions as the first super-contour map segmentation result (a code sketch of this iterative procedure is given below).
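The following is a minimal sketch of steps A101 to A106, taking the 80% area-ratio example from the text as the default threshold; the function name, the boolean-mask representation, and the per-object looping are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def overlay_analysis(icr: np.ndarray, level_segmentations, ratio_thresh=0.8):
    """Hierarchical overlay analysis (steps A101-A106), as a sketch.

    icr: binary initial candidate region (object = 255, background = 0).
    level_segmentations: label images ordered from the largest to the
    smallest segmentation threshold (e.g. from ucm_level_segmentations).
    Returns the merged first target regions, i.e. the super-contour map
    segmentation result, as a boolean mask."""
    candidate = icr == 255
    merged = np.zeros_like(candidate)
    for labels in level_segmentations:
        accepted = np.zeros_like(candidate)
        for region_id in range(1, labels.max() + 1):
            seg_obj = labels == region_id               # one segmentation object
            pieces, n = ndimage.label(seg_obj & candidate)
            for k in range(1, n + 1):                   # each superposed object
                piece = pieces == k
                if piece.sum() > ratio_thresh * seg_obj.sum():
                    accepted |= piece                   # a first target region
        merged |= accepted
        candidate &= ~accepted                          # delete from the ICR
    return merged
```

Applying the same routine to the high-resolution image data segmentation results yields the second super-contour map segmentation result.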
It is understood that the manner of obtaining the segmentation result of the second super-contour map may refer to the manner of obtaining the segmentation result of the first super-contour map, and is not described herein again.
In specific application, after a first super-contour map segmentation result and a second super-contour map segmentation result are obtained, the first super-contour map segmentation result and the second super-contour map segmentation result are fused to obtain a building candidate area. It is understood that the area of the pixel region can be represented by the number of pixels of the pixel region.
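The fusion rule itself is not spelled out here; one consistent reading, suggested by fig. 10(e) distinguishing pixels extracted by both data sources from pixels extracted by a single source, is a pixel-wise union of the two masks. The sketch below encodes that assumption.

```python
import numpy as np

def fuse_segmentation_results(nucm_result: np.ndarray,
                              iucm_result: np.ndarray) -> np.ndarray:
    """Building candidate region as the pixel-wise union of the first and
    second super-contour map segmentation results (boolean masks).
    The union rule is an assumption, not stated explicitly in the patent."""
    return nucm_result | iucm_result
```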
The third and fourth preset conditions can be specifically set according to the sorting method. For example, if the first segmentation results are sorted in descending order of segmentation threshold, the third preset condition selects the first segmentation result at the first position in the sequence, and the fourth preset condition selects the one at the second position. It is to be understood that when the first data segmentation result includes more than 3 first segmentation results, a fifth preset condition selects the first segmentation result at the third position and a sixth preset condition the one at the fourth position, and so on; details are not repeated here.
Taking preset 4 segmentation thresholds as an example, as shown in fig. 5, an application scenario diagram for obtaining a first super-contour map segmentation result is provided.
In FIG. 5a, a1-a4 denote the initial candidate regions, b1-b4 the second superposed images, c1-c4 the first target regions, and nSeg1, nSeg2, nSeg3, nSeg4 the 4 first segmentation results.
The 4 preset segmentation thresholds are 0.1, 0.2, 0.3, and 0.4, where nSeg4 is the first segmentation result corresponding to the segmentation threshold 0.4, nSeg3 corresponds to 0.3, nSeg2 to 0.2, and nSeg1 to 0.1.
The initial candidate region a1 is superposed with nSeg4 to obtain the second superposed image b1; from b1, the first superposed object meeting the second preset condition is identified as c1 and judged to be a first target region; the first superposed object c1 is deleted from the initial candidate region a1 to obtain a new initial candidate region a2.
The initial candidate region a2 is superposed with nSeg3 to obtain a new second superposed image b2; from b2, the first superposed object meeting the second preset condition is identified as c2 and judged to be a first target region; c2 is deleted from a2 to obtain a new initial candidate region a3.
The initial candidate region a3 is superposed with nSeg2 to obtain a new second superposed image b3; from b3, the first superposed object meeting the second preset condition is identified as c3 and judged to be a first target region; c3 is deleted from a3 to obtain a new initial candidate region a4.
The initial candidate region a4 is superposed with nSeg1 to obtain a new second superposed image b4; from b4, the first superposed object meeting the second preset condition is identified as c4 and judged to be a first target region.
By merging the first target regions c1-c4, the first super-contour map segmentation result shown in fig. 5b is obtained.
Taking the first segmentation result as an example, as shown in fig. 6, an application scenario diagram for determining the first target area is provided.
Fig. 6a is a schematic diagram of an initial candidate region, in which H1 and H2 represent two candidate objects; fig. 6b is a schematic diagram of a first segmentation result, in which F1 and F2 represent two first segmentation objects; fig. 6c is the second superposed image obtained by overlay analysis of fig. 6a and fig. 6b, in which J1-J3 represent three first superposed objects.
it can be understood that, when determining whether the first overlapping object J1 is the first target region, it is necessary to first calculate the area of the first overlapping object J1, then calculate the ratio of the area of the first overlapping object J1 to the area of the first divisional object F1, compare the area ratio with a preset area ratio threshold, and determine that the first overlapping object J1 is the first target region if the area ratio is greater than the preset area ratio threshold; if the area ratio is less than or equal to the preset area ratio threshold, determining that the first overlapping object J1 is not the first target area;
similarly, when determining whether the first overlapping object J2 is the first target region, it is necessary to first calculate the area of the first overlapping object J2, and then calculate the ratio of the area of the first overlapping object J2 to the area of the first segmentation object F1, where the area ratio is greater than the preset area ratio threshold, and then determine that the first overlapping object J2 is the first target region; if the area ratio is less than or equal to the preset area ratio threshold, determining that the first overlapping object J2 is not the first target area;
similarly, when determining whether the first overlapping object J3 is the first target region, it is necessary to first calculate the area of the first overlapping object J3 and then calculate the ratio of the area of the first overlapping object J3 to the area of the first divided object F2; comparing the area ratio with a preset area ratio threshold, and if the area ratio is greater than the preset area ratio threshold, determining that the first overlapping object J3 is a first target area; if the area ratio is less than or equal to the preset area ratio threshold, it is determined that the first overlapping object J3 is not the first target area.
It should be noted that, since the non-ground area map and the vegetation area map are binarized images and the initial candidate region is the processing result of superposing them, the initial candidate region likewise consists of an object area converted into the first gray value and a background area converted into the second gray value. Deleting a first and/or second target area from the initial candidate area thus means converting that area in the initial candidate area into the second gray value, so that it becomes part of the background area.
As shown in fig. 7, a schematic diagram of the superposition of the high-resolution image and the high-resolution image data segmentation result is provided;
FIG. 7A shows the boundary of the second segmentation result map generated with the threshold 0.4 superimposed on the high-resolution image; FIG. 7B shows the boundary of the second segmentation result map generated with the threshold 0.1 superimposed on the high-resolution image.
As shown in fig. 8, a schematic diagram of the superposition of the high resolution image and the first data segmentation result is provided;
in fig. 8, A denotes the boundary of the first segmentation result map generated with the threshold 0.4 superimposed on the high-resolution image; B denotes the boundary of the first segmentation result map generated with the threshold 0.1 superimposed on the high-resolution image.
FIG. 9 is a schematic diagram illustrating the first super-contour map segmentation result and the second super-contour map segmentation result;
In FIG. 9, (a) shows the second superposed image of the first segmentation result nSeg4 and the initial candidate region; (b) that of nSeg3; (c) that of nSeg2; (d) that of nSeg1; and (e) the first super-contour map segmentation result obtained by merging the second superposed images.
(f) shows the third superposed image of the second segmentation result iSeg4 and the initial candidate region; (g) that of iSeg3; (h) that of iSeg2; (i) that of iSeg1; and (j) the second super-contour map segmentation result obtained by merging the third superposed images.
In the high-resolution image data, the spectral variation within the pixel region of a building itself is weak, while the spectral contrast between the building and other objects at its periphery is high; in the first data, there is a clear height difference between non-ground pixels (or pixel regions) and ground pixels (or pixel regions), so buildings have high UCM values in both the first and second super-contour maps. Correspondingly, segmentation objects generated from the UCM that may belong to a building typically account for a large proportion of the initial candidate region.
It should be noted that the non-building background includes two parts: vegetation with lower brightness values and other non-ground background areas (e.g., signposts, statues). The low-brightness vegetation comprises shadowed or sparse vegetation; it is generally lower in height and smaller in area than buildings, and its interior height is irregular. Correspondingly, the height difference, spectral contrast, and elevation contrast of low-brightness vegetation areas are lower than those of buildings. Those skilled in the art will appreciate that other non-ground background areas are also typically smaller in area than buildings and do not share building-like features. Therefore, the non-ground background has small UCM values in both the first super-contour map and the super-contour map of the high-resolution remote sensing image. Furthermore, since the non-ground background occupies only a small proportion of the superposed image of any segmentation result and the initial candidate region, the segmentation objects of the non-building background account for only a small proportion of the initial candidate region.
And S105, optimizing the building candidate area according to the initial candidate area to obtain a building extraction result.
In one embodiment, step S105 includes:
superposing the initial candidate area and the building candidate area to obtain a fourth superposed image;
and optimizing the building candidate area according to the fourth superposed image to obtain the building extraction result.
In a specific application, because low plants adjacent to buildings stand close to them, the building candidate area obtained by simply fusing the initial candidate area and the self-adaptive segmentation result may include low vegetation areas adjacent to buildings, and these wrongly included low vegetation areas are not contained in the initial candidate area. The building candidate area therefore requires further processing: the initial candidate region and the building candidate region are superposed to obtain a fourth superposed image, and the regions of the building candidate area that do not coincide with the initial candidate region in the fourth superposed image are deleted from it, which eliminates low vegetation areas adjacent to buildings in complex scenes. Pixel regions whose area is smaller than a preset threshold are then removed from the building candidate area by morphological operations, morphological erosion and dilation are applied, and the boundaries of the building extraction result are optimized to obtain the final building extraction result. The preset threshold can be set according to the actual situation; for example, in practice the footprint of a building is generally not less than 10 square meters, so the preset threshold can be set to 10 square meters.
It should be noted that the building candidate area is the result of the data-fusion hierarchical overlay analysis of the initial candidate area and the self-adaptive segmentation result, and therefore also consists of an object area at the first gray value and a background area at the second gray value. Deleting a region from the building candidate area accordingly means converting its pixel gray values into the second gray value, so that it becomes a background pixel region of the building candidate area.
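As a minimal sketch of this post-processing, the following keeps only candidate pixels supported by the initial candidate region, drops components smaller than the 10 square meter example, and smooths boundaries with morphological erosion and dilation; the 0.5 m ground sampling distance, the function name, and the 3x3 elliptical kernel are illustrative assumptions.

```python
import numpy as np
import cv2

def refine_building_candidates(bcr, icr, gsd=0.5, min_area_m2=10.0):
    """bcr: building candidate area; icr: initial candidate region.
    Both are uint8 binary images, object = 255, background = 0."""
    # 1. Remove candidate pixels not covered by the ICR (adjacent low vegetation).
    refined = np.where(icr == 255, bcr, 0).astype(np.uint8)

    # 2. Drop connected components below the area threshold.
    min_pixels = int(min_area_m2 / (gsd * gsd))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(refined)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_pixels:
            refined[labels == i] = 0

    # 3. Erosion followed by dilation (opening), then closing, to tidy boundaries.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    refined = cv2.morphologyEx(refined, cv2.MORPH_OPEN, kernel)
    refined = cv2.morphologyEx(refined, cv2.MORPH_CLOSE, kernel)
    return refined
```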
Fig. 10 is a schematic view of an application scenario for obtaining a building extraction result according to an embodiment of the present application.
In fig. 10, fig. 10(a) shows a grayscale map of the high-resolution image data; fig. 10(b) shows the ground truth; fig. 10(c) shows the initial candidate region map; fig. 10(d) shows the building candidate area; and fig. 10(e) shows a grayscale overlay of the building candidate area, in which darker areas indicate portions extracted by both data sources and lighter areas indicate portions extracted by a single data source.
In one embodiment, step S1031 includes:
detecting the preprocessed first data to generate the first super-contour map;
carrying out binarization processing on the preprocessed first data to obtain the non-ground area map;
detecting the preprocessed high-resolution image data to generate a second super-contour map;
and carrying out binarization processing on the preprocessed high-resolution image data to obtain the vegetation area map.
In a specific application, the preprocessed first data can be detected by a first preset algorithm to obtain the first super-contour map nUCM; the first preset algorithm includes, but is not limited to, a contour detection algorithm based on the globalized probability of boundary (gPb).
The preprocessed high-resolution image data can likewise be detected by the first preset algorithm to obtain the second super-contour map iUCM.
Both super-contour maps nUCM and iUCM take values from 0 to 1.
Based on the characteristic that the first data carries height information, a first binarization conversion threshold is obtained by an automatic threshold method, and the preprocessed first data is binarized according to this first conversion threshold to generate the non-ground region NGR.
Based on the characteristic that the high-resolution image data I carries rich spectral information and detailed texture, a second binarization conversion threshold is obtained by an automatic threshold method, the preprocessed high-resolution image data is binarized according to this second conversion threshold, and the vegetation region VR is extracted from the binarized high-resolution image according to the normalized difference vegetation index (NDVI).
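A minimal sketch of the first binarization is given below, with Otsu's method standing in for the unspecified automatic threshold method; the choice of Otsu and the function name are assumptions, not the patent's stated method.

```python
import numpy as np
from skimage.filters import threshold_otsu

def non_ground_region(ndsm: np.ndarray) -> np.ndarray:
    """Binarize the preprocessed first data (nDSM) into the non-ground
    region NGR: pixels above the automatically derived height threshold
    become object pixels (255), the rest background (0)."""
    t = threshold_otsu(ndsm)  # first binarization conversion threshold
    return np.where(ndsm > t, 255, 0).astype(np.uint8)
```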
The normalized difference vegetation index NDVI is, for a remote sensing image, the difference between the near-infrared band reflectance and the red band reflectance divided by their sum, and can be obtained by the following formula:

NDVI = (NIR - R) / (NIR + R)

where NIR denotes the near-infrared band and R denotes the red band.
The NDVI takes values in the range -1 ≤ NDVI ≤ 1. A negative NDVI means the ground cover of the target site is cloud, water, snow, or the like, which reflect visible light strongly; an NDVI of 0 means the target site is covered by rock, bare soil, or the like, where NIR and R are approximately equal; and a positive NDVI means the target site is covered by vegetation, with the NDVI value increasing as the coverage increases.
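The following sketch computes the NDVI from the near-infrared and red bands and thresholds it into a binary vegetation map; the 0.3 cut-off is an illustrative assumption, since the text derives its conversion threshold automatically rather than fixing a constant.

```python
import numpy as np

def vegetation_map(nir: np.ndarray, red: np.ndarray,
                   ndvi_thresh: float = 0.3) -> np.ndarray:
    """NDVI = (NIR - R) / (NIR + R), thresholded to a binary vegetation
    map (vegetation = 255, background = 0)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)  # guard divide-by-zero
    return np.where(ndvi > ndvi_thresh, 255, 0).astype(np.uint8)
```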
Fig. 11 exemplarily provides a schematic diagram of an accuracy evaluation result of a building extraction method.
Fig. 11(a) shows the evaluation range, fig. 11(b) the ground truth, and fig. 11(c) the accuracy evaluation result, in which white represents correct detections and gray represents false and missed detections.
Table 1 shows quantitative analysis indexes for nineteen automatic building detection and extraction techniques.
Rows two to nineteen of Table 1 are the accuracy evaluation results of other automatic building detection and extraction methods published on the ISPRS website; the last row of Table 1 is the accuracy evaluation result of the present building extraction method. The indexes for evaluating the accuracy of a building extraction method are: 1. recall (the proportion of pixels correctly identified as building among all pixels containing buildings); 2. precision (the proportion of correctly identified building pixels among all pixels identified as building); and 3. quality (an integration of recall and precision).
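As a sketch of how these three indexes are computed pixel-wise, the following assumes the usual ISPRS definition of quality, TP / (TP + FP + FN); the exact formula is not given in the text, so that definition is an assumption consistent with its description as an integration of recall and precision.

```python
import numpy as np

def evaluate(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise accuracy indexes for binary masks (building = 255).
    Assumes both masks contain at least one building pixel."""
    tp = np.sum((pred == 255) & (truth == 255))  # correctly detected building
    fp = np.sum((pred == 255) & (truth == 0))    # false detections
    fn = np.sum((pred == 0) & (truth == 255))    # missed detections
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    return recall, precision, quality
```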
According to the data in Table 1, the extraction results of the present building extraction method have high precision and high quality.
(Table 1 is reproduced as images in the original publication; its contents cannot be recovered here.)

TABLE 1
According to the method, the first data and the high-resolution image data are fused to obtain the identification result of the building. The first data provides the elevation of above-ground objects relative to the ground and is not easily affected by environmental factors, while the high-resolution image data provides rich spectral features and texture information. The robustness of the method and the accuracy of the identification result are thereby improved, and the stability of that accuracy is ensured.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 12 shows a structural block diagram of the building extraction device 100 provided in the embodiment of the present application, corresponding to the building extraction method described in the above embodiment; for convenience of description, only the parts relevant to the embodiment of the present application are shown.
Referring to fig. 12, the building extraction apparatus 100 includes:
an obtaining module 101, configured to obtain target image data; wherein the target image data comprises first data and high-resolution image data;
a preprocessing module 102, configured to preprocess the target image data to obtain preprocessed target image data;
an image processing module 103, configured to perform image processing on the preprocessed target image data to obtain an initial candidate region and a self-adaptive segmentation result;
a fusion processing module 104, configured to perform fusion processing on the initial candidate region and the adaptive segmentation result to obtain a building candidate region;
and the optimization module 105 is configured to optimize the building candidate region according to the initial candidate region to obtain a building extraction result.
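As a minimal, non-limiting sketch, the module structure of fig. 12 could be expressed as the following Python skeleton; the class and method names are assumptions, and the method bodies are placeholders, since the application specifies the behavior of the modules rather than their code.

```python
class BuildingExtractionDevice:
    """Skeleton mirroring modules 101-105 of the apparatus in fig. 12."""

    def acquire(self):
        """Module 101: return the target image data
        (first data plus high-resolution image data)."""
        raise NotImplementedError

    def preprocess(self, target):
        """Module 102: preprocess the target image data."""
        raise NotImplementedError

    def image_process(self, preprocessed):
        """Module 103: return the initial candidate region
        and the self-adaptive segmentation result."""
        raise NotImplementedError

    def fuse(self, initial, segmentation):
        """Module 104: fuse into a building candidate region."""
        raise NotImplementedError

    def optimize(self, candidate, initial):
        """Module 105: optimize the candidate region into
        the final building extraction result."""
        raise NotImplementedError

    def run(self):
        # Wire the five modules in the order of the method embodiment.
        target = self.acquire()
        preprocessed = self.preprocess(target)
        initial, segmentation = self.image_process(preprocessed)
        candidate = self.fuse(initial, segmentation)
        return self.optimize(candidate, initial)
```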
In the apparatus, likewise, the first data and the high-resolution image data are fused to obtain the building identification result; the first data provides the elevation of above-ground objects relative to the ground and is not easily affected by environmental factors, while the high-resolution image data provides rich spectral features and texture information, which improves the robustness and the accuracy of the identification result while keeping that accuracy stable.
It should be noted that the information interaction between the above devices/units, their execution processes, and other such contents are based on the same concept as the method embodiments of the present application; for their specific functions and technical effects, reference may be made to the method embodiment section, which is not repeated here.
Fig. 13 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 13, the terminal device 13 of this embodiment includes: at least one processor 130 (only one is shown in fig. 13), a memory 131, and a computer program 132 stored in the memory 131 and executable on the at least one processor 130; when executing the computer program 132, the processor 130 implements the steps in any of the building extraction method embodiments described above.
The terminal device 13 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 130 and the memory 131. Those skilled in the art will appreciate that fig. 13 is only an example of the terminal device 13 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine some components, or use different components, such as an input/output device, a network access device, and the like.
The processor 130 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
In some embodiments, the memory 131 may be an internal storage unit of the terminal device 13, such as a hard disk or a memory of the terminal device 13. In other embodiments, the memory 131 may be an external storage device of the terminal device 13, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal device 13. Further, the memory 131 may include both an internal storage unit and an external storage device of the terminal device 13. The memory 131 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
An embodiment of the present application further provides a terminal device, including: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; when executing the computer program, the processor implements the steps of any of the method embodiments described above.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the above method embodiments.
An embodiment of the present application further provides a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A building extraction method, comprising:
acquiring target image data; wherein the target image data comprises first data and high-resolution image data;
preprocessing the target image data to obtain preprocessed target image data;
performing image processing on the preprocessed target image data to obtain an initial candidate region and a self-adaptive segmentation result;
performing fusion processing on the initial candidate region and the self-adaptive segmentation result to obtain a building candidate region;
and optimizing the building candidate area according to the initial candidate area to obtain a building extraction result.
2. The building extraction method of claim 1, wherein the image processing of the preprocessed target image data to obtain an initial candidate region and an adaptive segmentation result comprises:
detecting the preprocessed target image data to generate an information image; the information image comprises a first super contour map, a non-ground area map, a second super contour map and a vegetation area map;
overlapping the non-ground area map and the vegetation area map to obtain a first overlapped image;
acquiring a region which meets a first preset condition in the first superposed image, and taking the region as an initial candidate region;
performing adaptive segmentation on the first super-contour map and the second super-contour map to obtain corresponding adaptive segmentation results; wherein the adaptive segmentation result comprises a first data segmentation result and a high-resolution image data segmentation result.
3. The building extraction method according to claim 2, wherein the detecting the preprocessed target image data to generate an information image includes:
detecting the preprocessed first data to generate the first super-contour map;
carrying out binarization processing on the preprocessed first data to obtain the non-ground area map;
detecting the preprocessed high-resolution image data to generate a second super-contour map;
and carrying out binarization processing on the preprocessed high-resolution image data to obtain the vegetation area map.
4. The building extraction method according to claim 2, wherein the performing a fusion process on the initial candidate region and the adaptive segmentation result to obtain a building candidate region comprises:
performing hierarchical overlay analysis on the first data segmentation result and the initial candidate region to obtain a second overlay image; wherein the first data segmentation result comprises at least two first segmentation results;
identifying all first target areas in the initial candidate areas according to the second superposed image;
merging all the first target areas to serve as a first super-contour map segmentation result;
performing hierarchical overlay analysis on the high-resolution image data segmentation result and the initial candidate region to obtain a third overlay image; wherein the high resolution image data segmentation result comprises at least two second segmentation results;
identifying all second target areas in the initial candidate areas according to the third superposed image;
merging all the second target areas to serve as a second super-contour map segmentation result;
and fusing the first super-contour map segmentation result and the second super-contour map segmentation result to obtain the building candidate area.
5. The building extraction method of claim 1, wherein the optimizing the building candidate area according to the initial candidate area to obtain a building extraction result comprises:
superposing the initial candidate area and the building candidate area to obtain a fourth superposed image;
and optimizing the building candidate area according to the fourth superposed image to obtain the building extraction result.
6. The building extraction method according to any one of claims 1 to 5, wherein the preprocessing the target image data to obtain preprocessed target image data includes:
performing first preprocessing on the first data to obtain preprocessed first data; wherein the first pre-processing comprises generating a normalized digital surface model based on the first data;
performing second preprocessing on the high-resolution image data to obtain preprocessed high-resolution image data; the second preprocessing includes at least one of a denoising process, a registration process, and an orthorectification process.
7. A building extraction apparatus, comprising:
the acquisition module is used for acquiring target image data; wherein the target image data comprises first data and high-resolution image data;
the preprocessing module is used for preprocessing the target image data to obtain preprocessed target image data;
the image processing module is used for carrying out image processing on the preprocessed target image data to obtain an initial candidate region and a self-adaptive segmentation result;
the fusion processing module is used for carrying out fusion processing on the initial candidate area and the self-adaptive segmentation result to obtain a building candidate area;
and the optimization module is used for optimizing the building candidate area according to the initial candidate area to obtain a building extraction result.
8. The building extraction apparatus of claim 7, wherein the image processing module comprises:
the detection unit is used for detecting the preprocessed target image data to generate an information image; the information image comprises a first super contour map, a non-ground area map, a second super contour map and a vegetation area map;
the superposition processing unit is used for carrying out superposition processing on the non-ground area map and the vegetation area map to obtain a first superposition image;
an obtaining unit, configured to obtain a region that satisfies a first preset condition in the first overlay image, as an initial candidate region;
the self-adaptive segmentation unit is used for carrying out self-adaptive segmentation on the first super-contour map and the second super-contour map to obtain corresponding self-adaptive segmentation results; wherein the adaptive segmentation result comprises a first data segmentation result and a high-resolution image data segmentation result.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
CN202010644505.7A 2020-07-07 2020-07-07 Building extraction method and device, terminal equipment and readable storage medium Pending CN113920420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010644505.7A CN113920420A (en) 2020-07-07 2020-07-07 Building extraction method and device, terminal equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN113920420A (en) 2022-01-11

Family

ID=79231482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010644505.7A Pending CN113920420A (en) 2020-07-07 2020-07-07 Building extraction method and device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113920420A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581709A (en) * 2022-03-02 2022-06-03 深圳硅基智能科技有限公司 Model training, method, apparatus, and medium for recognizing target in medical image
CN115061150A (en) * 2022-04-14 2022-09-16 昆明理工大学 Building extraction method based on laser radar point cloud data pseudo-waveform feature processing
CN115287089A (en) * 2022-09-02 2022-11-04 香港理工大学 Method for preparing aromatic monomer from lignin
CN115287089B (en) * 2022-09-02 2023-08-25 香港理工大学 Method for preparing aromatic monomer from lignin
CN116805351A (en) * 2023-06-14 2023-09-26 壹品慧数字科技(上海)有限公司 Intelligent building management system and method based on Internet of things
CN117095299A (en) * 2023-10-18 2023-11-21 浙江省测绘科学技术研究院 Grain crop extraction method, system, equipment and medium for crushing cultivation area
CN117095299B (en) * 2023-10-18 2024-01-26 浙江省测绘科学技术研究院 Grain crop extraction method, system, equipment and medium for crushing cultivation area

Similar Documents

Publication Publication Date Title
CN113920420A (en) Building extraction method and device, terminal equipment and readable storage medium
US20220245936A1 (en) Object-based change detection using a neural network
US20230154181A1 (en) Systems and methods for analyzing remote sensing imagery
Zhang et al. Change detection between multimodal remote sensing data using Siamese CNN
Poullis et al. Delineation and geometric modeling of road networks
Lari et al. An adaptive approach for the segmentation and extraction of planar and linear/cylindrical features from laser scanning data
CN110378297B (en) Remote sensing image target detection method and device based on deep learning and storage medium
CN116051822A (en) Concave obstacle recognition method and device, processor and electronic equipment
Khandare et al. A survey paper on image segmentation with thresholding
CN114898321A (en) Method, device, equipment, medium and system for detecting road travelable area
Forlani et al. Adaptive filtering of aerial laser scanning data
Carrilho et al. Extraction of building roof planes with stratified random sample consensus
CN117036457A (en) Roof area measuring method, device, equipment and storage medium
CN114998387A (en) Object distance monitoring method and device, electronic equipment and storage medium
Shahin et al. SVA-SSD: saliency visual attention single shot detector for building detection in low contrast high-resolution satellite images
Liu et al. Identification of Damaged Building Regions from High-Resolution Images Using Superpixel-Based Gradient and Autocorrelation Analysis
Armenakis et al. Image processing and GIS tools for feature and change extraction
Patel et al. Road Network Extraction Methods from Remote Sensing Images: A Review Paper.
Volkov et al. Objects description and extraction by the use of straight line segments in digital images
Bouteldja et al. Retrieval of high resolution satellite images using texture features
Sun et al. Contextual models for automatic building extraction in high resolution remote sensing image using object-based boosting method
Jitkajornwanich et al. Road map extraction from satellite imagery using connected component analysis and landscape metrics
Wang et al. Integrated method for road extraction: deep convolutional neural network based on shape features and images
Su et al. Demolished building detection from aerial imagery using deep learning
Ankayarkanni et al. Object based segmentation techniques for classification of satellite image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination