CN114581786B - Method and device for estimating building area according to ground image - Google Patents

Method and device for estimating building area according to ground image

Info

Publication number
CN114581786B
CN114581786B
Authority
CN
China
Prior art keywords
building
area
information
module
ground image
Prior art date
Legal status
Active
Application number
CN202111619169.1A
Other languages
Chinese (zh)
Other versions
CN114581786A (en)
Inventor
刘宁
杨淑港
Current Assignee
Shenzhen City Industry Development Group Co ltd
Original Assignee
Shenzhen City Industry Development Group Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen City Industry Development Group Co ltd
Priority to CN202111619169.1A
Publication of CN114581786A
Application granted
Publication of CN114581786B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10032 - Satellite or aerial image; Remote sensing
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30181 - Earth observation
    • G06T 2207/30184 - Infrastructure

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for estimating building area from a ground image. In the method, a user designates a sample building in a ground image that meets a given scale requirement; after the color characteristics of the sample building are calculated, all areas with matching color characteristics are found in the ground image, each area is judged to be a building or not according to its shape and size, building outlines are then constructed from these areas to serve as the footprint shapes of the buildings, and finally the building area of each building is totalled. The method is simple to implement and quick to apply, the estimation error can be kept within 10%, and it is particularly suitable for estimating building area within a specific district, because buildings in such a district tend to be highly similar in structure.

Description

Method and device for estimating building area according to ground image
Technical Field
The invention relates to building area calculation and relocation cost accounting.
Background
Urbanization often involves the demolition and reconstruction of old houses. For developers, estimating the cost of demolishing old houses is very important, and the demolition (relocation) cost is calculated from the building area. The accuracy of the estimate of the building area to be demolished is therefore essential for a developer accounting for costs before bidding. For a construction company specialized in demolition work in particular, this assessment is a crucial factor in setting the bid price and a key to the company's profitability.
The most accurate way to obtain the area of the buildings to be demolished is to retrieve the property records for the designated demolition area from the housing administration bureau. However, this has several problems. First, it is difficult for a developer or a construction company to obtain the right to access property records. Second, the records held by the housing administration bureau are filed by owner rather than by building, so they must be collated, which is laborious and prone to errors from missing data. Third, because of possible illegal construction, not all buildings in the designated demolition area are on record at the housing administration bureau. The approach is therefore time-consuming, labour-intensive and inaccurate, and may not even be as straightforward as a field survey. During actual project bidding, however, there is typically no time to conduct a field survey, and the cost of the survey itself is also a factor.
Given these factors, estimating building area from satellite or aerial photographs, with artificial intelligence used to identify the buildings, appears attractive in principle. However, this approach also has problems. First, satellite or aerial images are taken under different weather conditions and at different times, so illumination varies widely, the images differ greatly from one another, and the accuracy of AI building recognition suffers. Second, differences in building types also produce large differences between images, again reducing recognition accuracy; for example, roofs in some regions are tiled while in other regions they are flat cement platforms. Third, building recognition in an image is a multi-target recognition problem, so the AI modeling is complex. Fourth, training the AI itself requires considerable effort.
Disclosure of Invention
The problem to be solved by the invention is as follows: estimating the building area to be demolished so that the demolition cost can be accounted for.
In order to solve the problems, the invention adopts the following scheme:
According to the invention, the method for estimating the building area according to the ground image comprises the following steps:
step S1: acquiring a ground image captured by satellite or aerial photography, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
step S2: presenting the ground image on a screen and then waiting for user input;
step S3: acquiring, through user input, the building information of a sample building designated by the user on the screen; the building information comprises a building outline and a building type;
step S4: extracting average color information and color difference information within the range determined by the building outline of the sample building in the ground image;
step S5: traversing the ground image according to the average color information and the color difference information, finding all areas of the ground image that fall within the range determined by the average color information and the color difference information, and constructing corresponding building information;
step S6: repeating steps S4 to S6 until the user input is finished;
step S7: calculating the building area S = K × Σ(s_i × h_i) from the collected building information; where Σ denotes summation, s_i is the number of pixels defined by the building outline in the i-th building information, and h_i is the number of floors determined by the building type in the i-th building information; K is the ground scale corresponding to each pixel.
Further, according to the method for estimating a building area according to a ground image of the present invention, in step S7, s_i = sa_i + sb_i × c; wherein sa_i is the number of pixels in the range defined by the building outline in the i-th building information, sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a coefficient set in advance.
Further, according to the method for estimating the building area according to the ground image of the present invention, in the step S5, the region is represented by a set of region edge points; the step S5 also comprises the step of judging whether the area is a building; the step of judging whether the area is a building comprises the following steps:
step S51: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point, and a lower left boundary point;
step S52: then respectively constructing four contour edge lines according to the four boundary points;
step S53: judging whether a region edge point is located on a contour edge line according to the distance from the region edge point to the contour edge line;
step S54: counting the proportion of the region edge points on the contour edge line in the region edge point set;
step S55: judging whether the area is a building according to whether the proportion of region edge points located on the contour edge lines exceeds a first threshold.
Further, according to the method for estimating a building area based on a ground image of the present invention, the step of determining whether the area is a building further comprises the steps of:
step S56: calculating the area according to the four boundary points;
step S57: calculating the area of the sample building determined according to the building outline;
step S58: judging whether the area is a building according to whether the ratio of the areas obtained in step S56 and step S57 exceeds a second threshold.
Further, according to the method for estimating a building area according to a ground image of the present invention, the step S4 further includes receiving corrected color difference information input by a user; the step S5 further includes receiving the modified building information input by the user.
The device for estimating the building area according to the ground image comprises the following modules:
a module M1 for: acquiring a ground image shot by a satellite or an aerial photo, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
a module M2 for: presenting the ground image on a screen and then waiting for user input;
a module M3 for: acquiring building information of a sample building appointed by a user on a screen through user input; the building information comprises a building outline and a building type;
a module M4 for: extracting average color information and color difference information within a range determined by the building outline of the sample building within the ground image;
a module M5 for: traversing the ground image according to the average color information and the color difference information, finding out all areas in the ground image within the range determined by the average color information and the color difference information, and constructing corresponding building information;
a module M6 for: repeatedly calling the modules M4 to M6 until the user input is finished;
a module M7 for: calculating the building area S = K × Σ(s_i × h_i) from the collected building information; where Σ denotes summation, s_i is the number of pixels defined by the building outline in the i-th building information, h_i is the number of floors determined by the building type in the i-th building information, and K is the ground scale corresponding to each pixel.
Further, according to the apparatus for estimating a building area according to a ground image of the present invention, in the module M7, s_i = sa_i + sb_i × c; wherein sa_i is the number of pixels in the range defined by the building outline in the i-th building information, sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a coefficient set in advance.
Further, according to the apparatus for estimating a building area according to a ground image of the present invention, in the module M5, the area is represented by a set of area edge points; the module M5 also comprises a module for judging whether the area is a building; the module for judging whether the area is a building comprises the following modules:
a module M51 for: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point and a lower left boundary point;
a module M52 for: then respectively constructing four contour edge lines according to the four boundary points;
a module M53 for: judging whether the area edge point is positioned on the contour edge line or not according to the distance from the area edge point to the contour edge line;
a module M54 for: counting the proportion of the region edge points on the contour edge line in the region edge point set;
a module M55 for: judging whether the area is a building according to whether the proportion of region edge points located on the contour edge lines exceeds a first threshold.
Further, according to the apparatus for estimating a building area according to a ground image of the present invention, the module for determining whether the area is a building further includes the following modules:
a module M56 for: calculating the area according to the four boundary points;
a module M57, configured to: calculating the area of the sample building determined according to the building outline;
a module M58 for: judging whether the area is a building according to whether the ratio of the areas obtained by the module M56 and the module M57 exceeds a second threshold.
Further, according to the device for estimating the building area according to the ground image of the present invention, the module M4 further comprises receiving the corrected color difference information input by the user; the module M5 also comprises means for receiving the modified building information entered by the user.
The invention has the following technical effects: the method is simple to implement and quick to apply, the estimation error can be kept within 10%, and it is particularly suitable for estimating building area within a specific district, because buildings in such a district tend to be highly similar in structure.
Drawings
Figs. 1, 2 and 3 are exemplary explanatory diagrams of the process of the method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The method for estimating the building area according to the ground image is realized by a machine executing a set of program instructions, and estimates the building area by combining UI interaction with machine calculation. The method comprises the following steps:
step S1: acquiring a ground image captured by satellite or aerial photography, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
step S2: presenting the ground image on a screen and then waiting for user input;
step S3: acquiring, through user input, the building information of a sample building designated by the user on the screen; the building information comprises a building outline and a building type;
step S4: extracting average color information and color difference information within the range determined by the building outline of the sample building in the ground image;
step S5: traversing the ground image according to the average color information and the color difference information, finding out all areas in the ground image within the range determined by the average color information and the color difference information, and constructing corresponding building information;
step S6: repeating the steps S4 to S6 until the user input is finished;
step S7: calculating the building area from the collected building information: S = K × Σ(s_i × h_i).
In step S7, Σ denotes summation, s_i is the number of pixels defined by the building outline in the i-th building information, and h_i is the number of floors determined by the building type in the i-th building information; K is the ground scale corresponding to each pixel.
Step S1 indicates that the input of the invention is a ground image, typically obtained by satellite photography or aerial photography. The input ground image must meet a resolution requirement; specifically, the ground size corresponding to each pixel must be not more than 1 meter. The smaller the ground size corresponding to each pixel, the higher the resolution of the ground image. At a typical screen resolution of 96 DPI, each pixel is about 0.265 mm across. A ground size of 1 meter per pixel therefore corresponds to displaying the image at a scale of roughly 1 cm to 38 meters on a 96 DPI screen. Currently, the display scale of Google Earth or other satellite map software is generally at most about 1 cm to 20 m. That is, in practice the ground image required by the invention can be obtained from the high-resolution satellite maps in such software. The ground image of a particular village shown in the example of Fig. 1 is from Google Earth, displayed at a scale of 1 cm to 20 m, which corresponds to a ground size of about 0.52 m per pixel. "The ground size corresponding to each pixel of the ground image is not more than 1 meter" means that if the ground size corresponding to each pixel is greater than 1 meter, the error of the method becomes too large.
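As a quick illustration of the scale arithmetic above, the short Python sketch below recomputes the per-pixel ground size from the screen DPI and the display scale; the 96 DPI figure and the helper names are assumptions made for this illustration only, not part of the claimed method.

```python
# Illustrative check of the scale arithmetic above (not part of the claimed method).
# Assumes a 96 DPI screen; helper names are made up for this sketch.

INCH_MM = 25.4

def pixel_size_mm(dpi: float = 96.0) -> float:
    """Physical size of one screen pixel in millimetres (about 0.265 mm at 96 DPI)."""
    return INCH_MM / dpi

def ground_size_per_pixel(scale_m_per_cm: float, dpi: float = 96.0) -> float:
    """Ground size in metres covered by one pixel at a display scale of 1 cm : scale_m_per_cm."""
    pixels_per_cm = 10.0 / pixel_size_mm(dpi)      # about 37.8 pixels per screen centimetre
    return scale_m_per_cm / pixels_per_cm

print(round(ground_size_per_pixel(20.0), 2))       # 1 cm : 20 m -> roughly 0.53 m per pixel
print(round(ground_size_per_pixel(38.0), 2))       # 1 cm : 38 m -> roughly 1.0 m per pixel (the limit)
```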
In addition, the ground image obtained in step S1 generally needs to be denoised. That is, the ground image input to the invention is preferably one that has already been denoised. Of course, those skilled in the art will understand that the denoising may also be performed in step S1 after the ground image is obtained. Gaussian filtering can generally be used to denoise the ground image. Gaussian filtering is well known to those skilled in the art and is not described in detail here.
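A minimal sketch of such a preprocessing step using OpenCV's Gaussian blur is shown below; the file name, the 5x5 kernel and the automatic sigma are illustrative choices, since the description only names Gaussian filtering in general.

```python
# Minimal denoising sketch using OpenCV Gaussian filtering (step S1 preprocessing).
# The file name, 5x5 kernel and automatic sigma are illustrative defaults, not
# values fixed by the patent.
import cv2

def denoise_ground_image(path: str):
    """Load a ground image and apply Gaussian filtering to suppress pixel noise."""
    image = cv2.imread(path)                    # BGR image as a NumPy array
    if image is None:
        raise FileNotFoundError(path)
    return cv2.GaussianBlur(image, (5, 5), 0)   # sigma derived from the kernel size

denoised = denoise_ground_image("ground_image.png")   # hypothetical file name
```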
Steps S2 to S6 together form a UI interaction process. Step S2 displays the ground image on a screen so that it can interact with the user. Step S3 receives the user's designation of a building on the ground image. For example, in the example of Fig. 1, the area marked by the quadrangle 11 is a building designated by the user. In the present invention the building designated by the user is called a sample building. In the example of Fig. 1 the quadrangle 11 represents the building outline of the sample building. The building outline represents the footprint shape of a building; the footprint area of the corresponding building can be calculated from the area of the range defined by the building outline together with the image scale, and the building area can then be obtained as building area = footprint area × number of floors.
Obviously, the height of a building cannot be determined from the ground image, and neither can its number of floors. The sunlight shadow of a building gives some hints, but if buildings are densely packed the shadows are incomplete and the number of floors still cannot be determined from them. However, for buildings within a given designated area, the building types and their floor counts are not numerous. For example, in the village illustrated in Fig. 1 there are generally two types of building: older buildings and more recent buildings. The older buildings are mostly single-storey, while the more recent buildings mostly have three or four storeys. Such statistics of building types and floor counts can be roughly obtained through a field survey, and can be determined more precisely by also considering the lengths of the buildings' sunlight shadows in the ground image.
Therefore, when designating a sample building, the user can determine its building type and specify that type in step S3. The number of floors of each building can then be determined from its building type in the final step S7. The correspondence between building type and number of floors may be configured and input by the user in advance. Of course, those skilled in the art will appreciate that the building type may also be indicated directly by the number of floors, e.g. 1, 2, 3, 4.
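Such a user-configured correspondence can be held in a simple lookup table, as in the sketch below; the type names and floor counts are hypothetical examples, since the patent leaves the mapping to prior user configuration.

```python
# Hypothetical lookup table from user-defined building type to number of floors.
# The type names and floor counts are examples only; the patent leaves the
# correspondence to prior user configuration.
FLOORS_BY_TYPE = {
    "old_single_storey": 1,
    "newer_three_storey": 3,
    "newer_four_storey": 4,
}

def floors_for_type(building_type: str) -> int:
    """Return h_i, the number of floors used in step S7, for a given building type."""
    return FLOORS_BY_TYPE[building_type]
```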
Steps S4 and S5 are a machine analysis process. An artificial intelligence method constructs, through training, model feature data that characterize the target, and then performs analysis and judgment based on that model feature data. In the method of the present invention, the model feature data used to characterize the target are the average color information and the color difference information within the outline range; that is, step S4 constructs the model feature data of the sample building from the sample building itself. Step S5 then finds similar areas in the ground image according to the average color information and color difference information serving as model feature data, and judges whether those similar areas are buildings. Colors may be represented by the three RGB channels, or by the three dimensions of hue, saturation and lightness. In this embodiment colors are represented by hue, saturation and lightness. That is, the average color information consists of the average hue, average saturation and average lightness, and the color difference information consists of the hue difference, saturation difference and lightness difference. The hue difference, saturation difference and lightness difference here may each be a variance, in particular a standard deviation, or a maximum difference.
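One possible realisation of the step S4 feature extraction is sketched below: the user-drawn outline is rasterised into a mask and per-channel statistics are taken in HSV space. The use of OpenCV, the standard-deviation form of the colour difference, and the function names are assumptions; note also that OpenCV stores hue as 0-179 and saturation/lightness as 0-255, unlike the degree/percentage figures used in the worked example that follows.

```python
# Sketch of step S4: average colour and colour-difference features inside the sample outline.
# Assumptions: OpenCV/NumPy are used, the colour difference is the per-channel standard
# deviation (one of the options mentioned above), and OpenCV's HSV value ranges apply.
import cv2
import numpy as np

def sample_colour_features(image_bgr: np.ndarray, outline: np.ndarray):
    """outline: (N, 2) array of building-outline vertices in pixel coordinates."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [outline.astype(np.int32)], 255)   # rasterise the outline into a mask
    pixels = hsv[mask == 255].astype(np.float32)           # (num_pixels, 3): H, S, V
    mean_hsv = pixels.mean(axis=0)                          # average colour information
    diff_hsv = pixels.std(axis=0)                           # colour difference information
    return mean_hsv, diff_hsv
```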
For example, in Fig. 1 the quadrangle 11 is the building outline entered by the user and received in step S3. The average color information and color difference information of the ground image within the building outline represented by the quadrangle 11 can then be calculated in step S4. Step S5 finds all areas of the ground image that fall within the range determined by the average color information and the color difference information; this range is a color range. For example, in the example of Fig. 1, the average color information within the building outline represented by the quadrangle 11 is: average hue 60, average saturation 15%, average lightness 92%; the color difference information is: hue difference 5, saturation difference 3%, lightness difference 4%. The color range determined from the average color information and the color difference information is therefore: hue 55 to 65, saturation 12% to 18%, lightness 88% to 96%. Regions of the ground image lying within this color range can thus be found. For example, the building enclosed by the quadrangle 11 in Fig. 1 conforms to this color range, and the resulting region is illustrated in Fig. 2. Building outlines are then constructed from regions such as the one illustrated in Fig. 2, and the corresponding building information can be constructed by combining them with the building type of the sample building. The region illustrated in Fig. 2 has an irregular boundary and is represented by a set of points, referred to as region edge points or region boundary points.
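Continuing the sketch, step S5 can be realised as a colour-range mask (mean plus or minus the difference) followed by connected-component labelling to group matching pixels into candidate regions; the OpenCV calls and the minimum-size filter are illustrative assumptions, not the patent's prescribed implementation.

```python
# Sketch of step S5: find all connected regions whose colour lies within
# mean +/- difference from step S4. Connected-component labelling and the
# minimum-size filter are illustrative choices.
import cv2
import numpy as np

def candidate_regions(image_bgr: np.ndarray, mean_hsv: np.ndarray,
                      diff_hsv: np.ndarray, min_pixels: int = 50):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = np.clip(mean_hsv - diff_hsv, 0, 255).astype(np.uint8)
    upper = np.clip(mean_hsv + diff_hsv, 0, 255).astype(np.uint8)
    in_range = cv2.inRange(hsv, lower, upper)              # pixels inside the colour range
    count, labels, stats, _ = cv2.connectedComponentsWithStats(in_range)
    regions = []
    for label in range(1, count):                           # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_pixels:    # drop tiny speckles
            regions.append(labels == label)                  # boolean mask of one region
    return regions
```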
Obviously, the building information constructed through steps S4 and S5 is based on the sample building of step S3. All of the buildings in the ground image cannot be found from a single sample building, so the user is required to continue designating sample buildings, repeating steps S3, S4 and S5 until all of the building information in the ground image has been found. The user input is finished when the user no longer designates a sample building and all buildings in the ground image have been marked; the building area can then be summed up as S = K × Σ(s_i × h_i), where Σ denotes summation, s_i is the number of pixels defined by the building outline in the i-th building information, h_i is the number of floors determined by the building type in the i-th building information, and K is the ground scale corresponding to each pixel. The building information over which s_i is accumulated comes both from the sample buildings designated by the user in step S3 and from the buildings found by the machine in steps S4 and S5.
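The final aggregation of step S7 is a single weighted sum. The sketch below assumes each building record already carries its pixel count s_i and floor number h_i, and interprets K as the ground area (in square metres) represented by one pixel; that interpretation and the record structure are assumptions made for illustration.

```python
# Sketch of step S7: total building area as a weighted sum over all building records.
# BuildingInfo and the reading of K as square metres of ground per pixel are
# assumptions made for this illustration.
from dataclasses import dataclass

@dataclass
class BuildingInfo:
    pixel_count: int   # s_i: pixels enclosed by the building outline
    floors: int        # h_i: number of floors from the building type

def total_building_area(buildings: list[BuildingInfo], k: float) -> float:
    """k: ground area represented by one pixel, e.g. 0.52 ** 2 square metres."""
    return k * sum(b.pixel_count * b.floors for b in buildings)

# Two hypothetical records at about 0.52 m of ground per pixel:
print(total_building_area([BuildingInfo(900, 1), BuildingInfo(1200, 3)], 0.52 ** 2))
```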
In addition, in order to avoid interference from other ground objects in the ground image, the present embodiment introduces, when building information is constructed in step S5, a judgment of whether a found area is a building, and constructs the corresponding building information only when the area is a building. Specifically, the area is represented by a set of area edge points, and whether the area is a building is then judged from the area's shape and size.
The method for judging whether the area is a building or not according to the area shape comprises the following steps:
step S51: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point and a lower left boundary point;
step S52: then respectively constructing four contour edge lines according to the four boundary points;
step S53: judging whether the edge points of the area are positioned on the contour edge line;
step S54: counting the proportion of the regional edge points on the outline border in the regional edge point set;
step S55: and judging whether the area is a building or not according to the proportion of the edge points of the area on the outline border.
Taking the region illustrated in Fig. 2 as an example, four boundary points can be found for it: an upper left boundary point TL, an upper right boundary point TR, a lower left boundary point BL, and a lower right boundary point BR. Then, in step S52, four contour edge lines can be constructed from these four boundary points: the line connecting the upper left boundary point TL and the upper right boundary point TR, the line connecting the upper left boundary point TL and the lower left boundary point BL, the line connecting the upper right boundary point TR and the lower right boundary point BR, and the line connecting the lower left boundary point BL and the lower right boundary point BR; see the contour edge lines 30 in the example of Fig. 3.
In step S53, "judging whether a region edge point is located on a contour edge line" uses a distance calculation: the distances from the region edge point to the four contour edge lines are calculated, the minimum of the four is taken, and the point is judged to be on a contour edge line if this minimum distance is smaller than a distance threshold. In this embodiment the distance threshold is preferably 2 to 3 pixels. For example, in the region illustrated in Fig. 2, the region edge points marked by the circles 21 and 22 are not located on any contour edge line, while the other region edge points are.
In step S55, "judging whether the area is a building according to the proportion of region edge points located on the contour edge lines" uses a threshold test: the proportion of region edge points lying on the contour edge lines is first counted (step S54), and the area is then judged to be a building according to whether this proportion exceeds a first threshold. In this embodiment the first threshold is preferably 70%: if the proportion exceeds 70% the area is judged to be a building, otherwise it is not.
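Putting steps S51 to S55 together, a sketch of the shape test is given below. Selecting the four boundary points by coordinate-sum and coordinate-difference extremes is an illustrative choice, since the embodiment does not fix the selection rule; the 2.5-pixel distance threshold and the 0.7 proportion threshold follow the preferred values of 2-3 pixels and 70% given above.

```python
# Sketch of the shape test in steps S51-S55. Boundary points are picked by
# coordinate-sum/difference extremes (an illustrative rule); the thresholds follow
# the embodiment's preferred 2-3 pixels and 70%.
import numpy as np

def point_line_distance(points: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Perpendicular distance from each point to the infinite line through a and b."""
    ab = b - a
    cross = np.abs(ab[0] * (points[:, 1] - a[1]) - ab[1] * (points[:, 0] - a[0]))
    return cross / (np.hypot(ab[0], ab[1]) + 1e-9)

def looks_like_building(edge_points: np.ndarray,
                        dist_threshold: float = 2.5,
                        ratio_threshold: float = 0.7) -> bool:
    """edge_points: (N, 2) array of region edge points (x, y) in image coordinates."""
    pts = edge_points.astype(np.float64)
    s, d = pts.sum(axis=1), pts[:, 0] - pts[:, 1]
    tl, br = pts[s.argmin()], pts[s.argmax()]                # S51: four boundary points
    tr, bl = pts[d.argmax()], pts[d.argmin()]
    edges = [(tl, tr), (tr, br), (br, bl), (bl, tl)]          # S52: four contour edge lines
    dists = np.stack([point_line_distance(pts, a, b) for a, b in edges])
    on_edge = dists.min(axis=0) <= dist_threshold             # S53: nearest-edge distance test
    return on_edge.mean() >= ratio_threshold                  # S54-S55: proportion test
```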
The method for judging whether the area is a building or not according to the area size comprises the following steps:
step S56: calculating the area according to the four boundary points;
step S57: calculating the area of the sample building determined according to the building outline;
step S58: and judging whether the area is a building or not according to the area ratio obtained in the step S56 and the step S57.
The four boundary points in step S56 are the four boundary points found in step S51. The area calculated in step S56 is that of the building found by the machine in steps S4 and S5, while the area calculated in step S57 is that of the sample building designated by the user in step S3. In this embodiment, the areas in steps S56 and S57 are areas of the buildings in the ground image, specifically numbers of pixels enclosed by the outline or the boundary, rather than the floor areas of the actual buildings. Of course, those skilled in the art will understand that the comparison in steps S56 and S57 could equally be made using actual floor areas.
In step S58, "judging whether the area is a building according to the ratio of the areas obtained in step S56 and step S57" uses a threshold test: the ratio of the two areas is first calculated, and the area is then judged to be a building according to whether this ratio lies within a second threshold. The second threshold here is an interval, preferably 0.5 to 2 in this embodiment: if the ratio lies between 0.5 and 2 the area is a building, otherwise it is not.
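The size test of steps S56 to S58 can likewise be sketched as a shoelace-formula area of the quadrilateral spanned by the four boundary points, compared against the sample building's pixel area; using the shoelace formula is an assumption, while the 0.5-2 interval is the embodiment's preferred second threshold.

```python
# Sketch of the size test in steps S56-S58: shoelace area of the quadrilateral
# spanned by the four boundary points, compared with the sample building's pixel
# area. The shoelace formula is an assumption; 0.5-2 is the preferred threshold.
import numpy as np

def quad_area(corners: np.ndarray) -> float:
    """Shoelace area of a quadrilateral given as four (x, y) vertices in order."""
    x, y = corners[:, 0], corners[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def size_matches_sample(region_corners: np.ndarray, sample_area_px: float,
                        low: float = 0.5, high: float = 2.0) -> bool:
    ratio = quad_area(region_corners) / max(sample_area_px, 1.0)
    return low <= ratio <= high
```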
As described above, this embodiment is a building area estimation method implemented by combining UI interaction with machine calculation. For the convenience of UI interaction on the one hand, and because a building may be partly occluded by other obstacles on the other, this embodiment uses a polygon as the building outline in step S3 and a quadrangle with the four boundary points as vertices in step S5. After the machine has constructed a quadrangle as the building outline in step S5, the user can edit it: for example, the user may add vertices so that the building outline becomes a pentagon, a hexagon or a polygon with even more sides, or may adjust the positions of the polygon's vertices. The user can also modify the building type. That is, in this embodiment step S5 further includes a step of receiving corrected building information input by the user.
In addition, since this embodiment is a UI interaction system, after the machine has calculated in step S4 the average color information and the color difference information within the range determined by the building outline of the sample building, the user can also correct the color difference information. That is, step S4 includes receiving corrected color difference information input by the user.
In addition, to allow for error, in this embodiment the number of pixels s_i defined by the building outline in the i-th building information in step S7 is calculated as s_i = sa_i + sb_i × c, where sa_i is the number of pixels in the range defined by the building outline in the i-th building information, sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a preset coefficient. In this embodiment, as a UI interaction system, the preset coefficient c can be edited by the user; it is generally 0.5 to 2.0 and is initialized to 1.0.
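A sketch of this pixel count is given below; counting interior and outline pixels from the rasterised polygon with OpenCV is one possible way to obtain sa_i and sb_i, and is an assumption rather than the patent's prescribed procedure.

```python
# Sketch of the pixel count s_i = sa_i + sb_i * c used in step S7. Rasterising the
# polygon with OpenCV to count interior (sa_i) and outline (sb_i) pixels is an
# illustrative choice; c defaults to 1.0 and is user-editable (roughly 0.5-2.0).
import cv2
import numpy as np

def building_pixel_count(outline: np.ndarray, image_shape: tuple, c: float = 1.0) -> float:
    filled = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.fillPoly(filled, [outline.astype(np.int32)], 255)             # interior plus outline
    border = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.polylines(border, [outline.astype(np.int32)], True, 255, 1)   # outline only (sb_i)
    sb = int(np.count_nonzero(border))
    sa = int(np.count_nonzero(filled)) - sb                            # interior only (sa_i)
    return sa + sb * c
```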

Claims (10)

1. A method for estimating a building area based on a ground image, comprising the steps of:
step S1: acquiring a ground image shot by a satellite or an aerial photo, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
step S2: presenting the ground image on a screen and then waiting for user input;
step S3: acquiring building information of a sample building appointed by a user on a screen through user input; the building information comprises a building outline and a building type;
step S4: extracting average color information and color difference information within a range determined by a building outline of the sample building within the ground image;
step S5: traversing the ground image according to the average color information and the color difference information, finding out all areas in the ground image within the range determined by the average color information and the color difference information, and constructing corresponding building information;
step S6: repeating the steps S4 to S6 until the user input is finished;
step S7: calculating the building area S = K × Σ(s_i × h_i) according to the collected building information; wherein Σ represents summation, s_i represents the number of pixels defined by the building outline in the i-th building information, and h_i represents the number of floors determined by the building type in the i-th building information; K is the ground scale corresponding to each pixel.
2. The method according to claim 1, wherein in step S7, s_i = sa_i + sb_i × c; wherein sa_i is the number of pixels in the range defined by the building outline in the i-th building information; sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a coefficient set in advance.
3. The method according to claim 1, wherein in step S5, the area is represented by a set of area edge points; the step S5 also comprises the step of judging whether the area is a building; the step of judging whether the area is a building comprises the following steps:
step S51: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point and a lower left boundary point;
step S52: then respectively constructing four contour edge lines according to the four boundary points;
step S53: judging whether the area edge point is positioned on the contour edge line or not according to the distance from the area edge point to the contour edge line;
step S54: counting the proportion of the region edge points on the contour edge line in the region edge point set;
step S55: and judging whether the area is a building or not according to whether the proportion of the edge points of the area on the outline side line exceeds a first threshold value or not.
4. The method of claim 3, wherein the step of determining whether the area is a building further comprises the steps of:
step S56: calculating the area according to the four boundary points;
step S57: calculating the area of the sample building determined according to the building outline;
step S58: and judging whether the area ratio obtained in the step S56 and the step S57 exceeds a second threshold value or not, and judging whether the area is a building or not.
5. The method according to claim 1, wherein the step S4 further comprises receiving the corrected color difference information inputted by the user; the step S5 further includes receiving the modified building information input by the user.
6. An apparatus for estimating a building area based on a ground image, comprising:
a module M1 for: acquiring a ground image shot by a satellite or an aerial photo, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
a module M2 for: presenting the ground image on a screen and then waiting for user input;
a module M3 for: acquiring building information of a sample building appointed by a user on a screen through user input; the building information comprises a building outline and a building type;
a module M4 for: extracting average color information and color difference information within a range determined by the building outline of the sample building within the ground image;
a module M5 for: traversing the ground image according to the average color information and the color difference information, finding out all areas in the ground image within the range determined by the average color information and the color difference information, and constructing corresponding building information;
a module M6 for: repeatedly calling the modules M4 to M6 until the user input is finished;
a module M7 for: calculating the building area S = K × Σ(s_i × h_i) according to the collected building information; wherein Σ represents summation, s_i represents the number of pixels defined by the building outline in the i-th building information, and h_i represents the number of floors determined by the building type in the i-th building information; K is the ground scale corresponding to each pixel.
7. The apparatus for estimating a building area according to a ground image as claimed in claim 6, wherein in the module M7, s_i = sa_i + sb_i × c; wherein sa_i is the number of pixels in the range defined by the building outline in the i-th building information; sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a coefficient set in advance.
8. The apparatus for estimating a building area according to a ground image as claimed in claim 6, wherein in the module M5, the region is represented by a set of region edge points; the module M5 also comprises a module for judging whether the area is a building; the module for judging whether the area is a building comprises the following modules:
a module M51 configured to: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point and a lower left boundary point;
a module M52 for: then respectively constructing four contour edge lines according to the four boundary points;
module M53: judging whether the area edge point is positioned on the contour edge line or not according to the distance from the area edge point to the contour edge line;
a module M54 for: counting the proportion of the region edge points on the contour edge line in the region edge point set;
a module M55 for: and judging whether the area is a building or not according to whether the proportion of the edge points of the area on the outline side line exceeds a first threshold value or not.
9. The apparatus for estimating an area of a building according to a ground image as claimed in claim 8, wherein the module for determining whether the area is a building further comprises the modules for:
a module M56 for: calculating the area according to the four boundary points;
a module M57 configured to: calculating the area of the sample building determined according to the building outline;
a module M58 for: and judging whether the area ratio obtained by the module M56 and the module M57 exceeds a second threshold value or not, and judging whether the area is a building or not.
10. The apparatus for estimating a building area according to a ground image as claimed in claim 6, wherein the module M4 further comprises receiving the corrected color difference information inputted by the user; the module M5 also comprises means for receiving the modified building information entered by the user.
CN202111619169.1A 2021-12-28 2021-12-28 Method and device for estimating building area according to ground image Active CN114581786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111619169.1A CN114581786B (en) 2021-12-28 2021-12-28 Method and device for estimating building area according to ground image

Publications (2)

Publication Number Publication Date
CN114581786A CN114581786A (en) 2022-06-03
CN114581786B true CN114581786B (en) 2022-11-25

Family

ID=81769575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111619169.1A Active CN114581786B (en) 2021-12-28 2021-12-28 Method and device for estimating building area according to ground image

Country Status (1)

Country Link
CN (1) CN114581786B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013116793A2 (en) * 2012-02-03 2013-08-08 Eagle View Technologies, Inc Systems and methods for estimation of building floor area
CN103699900A (en) * 2014-01-03 2014-04-02 西北工业大学 Automatic batch extraction method for horizontal vector contour of building in satellite image
CN103837537A (en) * 2014-03-17 2014-06-04 武汉大学 Sand-free metal inkjet printing plate lattice point area rate measurement method
CN104463868A (en) * 2014-12-05 2015-03-25 北京师范大学 Rapid building height obtaining method based on parameter-free high-resolution image
CN105283884A (en) * 2013-03-13 2016-01-27 柯法克斯公司 Classifying objects in digital images captured using mobile devices
CN108052876A (en) * 2017-11-28 2018-05-18 广东数相智能科技有限公司 Regional development appraisal procedure and device based on image identification
CN108228651A (en) * 2016-12-21 2018-06-29 青岛祥智电子技术有限公司 A kind of GIS-Geographic Information System
CN109034073A (en) * 2018-07-30 2018-12-18 深圳大学 Predict method, system, equipment and the medium of building demolition waste yield
US10339646B1 (en) * 2019-01-14 2019-07-02 Sourcewater, Inc. Image processing of aerial imagery for energy infrastructure analysis using pre-processing image selection
CN110570521A (en) * 2019-09-10 2019-12-13 同济大学 urban ground roughness calculation method
JP2020016923A (en) * 2018-07-23 2020-01-30 株式会社パスコ House transfer estimation apparatus and program
CN113128819A (en) * 2020-01-16 2021-07-16 江苏和网源电气有限公司 Distributed photovoltaic system software design method based on Google map API
CN113516135A (en) * 2021-06-23 2021-10-19 江苏师范大学 Remote sensing image building extraction and contour optimization method based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11879732B2 (en) * 2019-04-05 2024-01-23 Ikegps Group Limited Methods of measuring structures

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Building Area Estimation in Drone Aerial Images Based on Mask R-CNN";Jun Chen 等;《IEEE Geoscience and Remote Sensing Letters》;20200101;第PP卷(第99期);1-4 *
"TERRESTRIAL LASER SCANNING AND SATELLITE DATA IN CULTURAL HERITAGE BUILDING DOCUMENTATION";Karagianni A. 等;《The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences》;20210828;第XLVI-M-1-2021卷;361-366 *
"地理信息***建筑物三维重建技术研究与应用";李印鑫;《中国优秀硕士学位论文全文数据库 (工程科技Ⅱ辑)》;20210415(第(2021)04期);47-48 *
"基于电子地图的建筑物大小估算方法研究";刘博;《中国优秀硕士学位论文全文数据库 (基础科学辑)》;20180215(第(2018)02期);A008-190 *
"基于高分辨率遥感影像的人工地物提取研究";刘浩;《中国优秀硕士学位论文全文数据库 (基础科学辑)》;20210615(第(2021)06期);A008-156 *

Also Published As

Publication number Publication date
CN114581786A (en) 2022-06-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant