CN114581786A - Method and device for estimating building area according to ground image


Info

Publication number
CN114581786A
Authority
CN
China
Prior art keywords
building
area
information
module
ground image
Prior art date
Legal status
Granted
Application number
CN202111619169.1A
Other languages
Chinese (zh)
Other versions
CN114581786B (en)
Inventor
刘宁 (Liu Ning)
杨淑港 (Yang Shugang)
Current Assignee
Shenzhen City Industry Development Group Co ltd
Original Assignee
Shenzhen City Industry Development Group Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen City Industry Development Group Co ltd
Priority to CN202111619169.1A
Publication of CN114581786A
Application granted
Publication of CN114581786B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30181 - Earth observation
    • G06T2207/30184 - Infrastructure

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for estimating building area from ground images. In the method, the user designates a sample building in a ground image that meets a given scale requirement; after the color characteristics of the sample building are calculated, all areas in the ground image with matching color characteristics are found, and whether each area is a building is judged from its shape and size; building outlines are then constructed from these areas to serve as the building footprints, and finally the floor area of each building is accumulated. The method is simple to implement and the estimation is convenient and fast, with the estimation error limited to within 10%. Because the building structures within a specific area are highly similar, the method is particularly suitable for estimating the building area of such an area.

Description

Method and device for estimating building area according to ground image
Technical Field
The invention relates to building area calculation and demolition cost accounting.
Background
In the course of urbanization, old houses are often demolished and redeveloped. For developers, the cost estimate of demolishing old houses is very important, and the demolition cost is computed from the building area. Therefore, accurately estimating the area of the buildings to be demolished is very important for developers when accounting for costs before bidding. For a construction company specializing in demolition work in particular, the assessment of the area to be demolished is a crucial factor in setting the bid price and is key to the company's profitability.
The most accurate way to determine the area of the buildings to be demolished is to retrieve the property records of the designated demolition area from the housing administration bureau and total them. However, this has several problems. The first problem is that it is difficult for a developer or a construction company to obtain the right to access property records. The second problem is that the records at the housing administration bureau are filed by owner rather than by building, so they must be collated, which is quite troublesome and easily leads to errors from missing data. The third problem is that, because of possible illegal buildings, not all structures in the designated demolition area are on file at the housing administration bureau. This method is therefore time-consuming, labor-intensive and inaccurate, and may not even be as straightforward as a field survey. During actual project bidding, however, there is typically no time to conduct a field survey, and the cost of the field survey itself is also a factor.
Given the above factors, estimating building area from satellite or aerial photographs combined with artificial intelligence to identify buildings appears, in principle, to be a good approach. However, this approach has its own problems. The first problem is that satellite or aerial images are shot under different weather conditions and at different times, so illumination varies widely, the images differ greatly from one another, and the accuracy of artificial-intelligence building identification is low. The second problem is that differences in building types also make the images differ greatly, again lowering identification accuracy; for example, roofs in some regions are covered with tiles while roofs in other regions are flat cement platforms. The third problem is that identifying buildings in an image is a multi-target recognition problem, so the artificial-intelligence modeling complexity is high. The fourth problem is that training the artificial intelligence itself requires a significant amount of effort.
Disclosure of Invention
The problem to be solved by the invention is: estimating the building area to be demolished in order to account for the demolition cost.
In order to solve this problem, the invention adopts the following scheme:
according to the invention, the method for estimating the building area according to the ground image comprises the following steps:
step S1: acquiring a ground image shot by a satellite or an aerial photo, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
step S2: presenting the ground image on a screen and then waiting for user input;
step S3: acquiring building information of a sample building appointed by a user on a screen through user input; the building information comprises a building outline and a building type;
step S4: extracting average color information and color difference information within a range determined by a building outline of the sample building within the ground image;
step S5: traversing the ground image according to the average color information and the color difference information, finding out all areas in the ground image within the range determined by the average color information and the color difference information, and constructing corresponding building information;
step S6: repeating steps S4-S6 until the user input ends;
step S7: calculating the building area S = K × Σ(s_i × h_i) from the collected building information; wherein Σ denotes summation, s_i represents the number of pixels defined by the building outline in the i-th building information, h_i represents the number of floors determined by the building type in the i-th building information, and K is the ground scale corresponding to each pixel.
Further, according to the method for estimating a building area according to a ground image of the present invention, in step S7, s_i = sa_i + sb_i × c; wherein sa_i is the number of pixels within the range defined by the building outline in the i-th building information, sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a coefficient set in advance.
Further, according to the method for estimating a building area according to a ground image of the present invention, in step S5, the region is represented by a set of region edge points; the step S5 further includes a step of determining whether the area is a building; the step of judging whether the area is a building comprises the following steps:
step S51: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point and a lower left boundary point;
step S52: then respectively constructing four contour edge lines according to the four boundary points;
step S53: whether the region edge point is located on the contour edge line or not is judged according to the distance from the region edge point to the contour edge line;
step S54: counting the proportion of the region edge points on the contour edge line in the region edge point set;
step S55: and judging whether the area is a building according to whether the proportion of region edge points on the contour edge lines exceeds a first threshold.
Further, according to the method for estimating a building area according to a ground image of the present invention, the step of determining whether the area is a building further includes the steps of:
step S56: calculating the area according to the four boundary points;
step S57: calculating the area of the sample building determined according to the building outline;
step S58: and judging whether the area ratio obtained in the step S56 and the step S57 exceeds a second threshold value or not, and judging whether the area is a building or not.
Further, according to the method for estimating a building area according to a ground image of the present invention, the step S4 further includes receiving the corrected color difference information input by the user; the step S5 further includes receiving the modified building information input by the user.
The device for estimating the building area according to the ground image comprises the following modules:
module M1, configured to: acquiring a ground image shot by a satellite or an aerial photo, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
module M2, configured to: presenting the ground image on a screen and then waiting for user input;
module M3, configured to: acquiring building information of a sample building appointed by a user on a screen through user input; the building information comprises a building outline and a building type;
module M4, configured to: extracting average color information and color difference information within a range determined by a building outline of the sample building within the ground image;
module M5, configured to: traversing the ground image according to the average color information and the color difference information, finding out all areas in the ground image within the range determined by the average color information and the color difference information, and constructing corresponding building information;
module M6, configured to: repeatedly calling the modules M4-M6 until the user input is finished;
module M7, configured to: calculating the building area S = K × Σ(s_i × h_i) from the collected building information; wherein Σ denotes summation, s_i represents the number of pixels defined by the building outline in the i-th building information, h_i represents the number of floors determined by the building type in the i-th building information, and K is the ground scale corresponding to each pixel.
Further, according to the apparatus for estimating a building area from a ground image of the present invention, in the module M7, s_i = sa_i + sb_i × c; wherein sa_i is the number of pixels within the range defined by the building outline in the i-th building information, sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a coefficient set in advance.
Further, according to the apparatus for estimating a building area based on a ground image of the present invention, in the module M5, the region is represented by a set of region edge points; the module M5 further includes a module for determining whether the area is a building; the module for judging whether the area is a building comprises the following modules:
a module M51 for: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point and a lower left boundary point;
module M52, configured to: then respectively constructing four contour edge lines according to the four boundary points;
module M53, configured to: judging whether the area edge point is positioned on the contour edge line or not according to the distance from the area edge point to the contour edge line;
module M54, configured to: counting the proportion of the region edge points on the contour edge line in the region edge point set;
module M55, configured to: judging whether the area is a building according to whether the proportion of region edge points on the contour edge lines exceeds a first threshold.
Further, according to the apparatus for estimating a building area according to a ground image of the present invention, the module for determining whether the area is a building further includes the following modules:
module M56, configured to: calculating the area according to the four boundary points;
module M57, configured to: calculating the area of the sample building determined according to the building outline;
module M58, configured to: and judging whether the area ratio obtained by the module M56 and the module M57 exceeds a second threshold value or not, and judging whether the area is a building or not.
Further, according to the apparatus for estimating a building area according to a ground image of the present invention, the module M4 further comprises receiving the corrected color difference information input by the user; the module M5 also includes receiving revised building information entered by the user.
The invention has the following technical effects: the method of the invention is simple to implement and the estimation is convenient and fast; the estimation error can be limited to within 10%, and because building structures within a specific area are highly similar, the method is particularly suitable for estimating the building area of such an area.
Drawings
Fig. 1, 2 and 3 are exemplary explanatory diagrams of the process of the method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The method for estimating the building area according to the ground image of the invention is implemented by a machine executing a set of program instructions, and estimates the building area by combining UI interaction with machine computation. The method comprises the following steps:
step S1: acquiring a ground image shot by a satellite or an aerial photo, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
step S2: presenting the ground image on a screen and then waiting for user input;
step S3: acquiring building information of a sample building appointed by a user on a screen through user input; the building information comprises a building outline and a building type;
step S4: extracting average color information and color difference information within a range determined by the building outline of the sample building in the ground image;
step S5: traversing the ground image according to the average color information and the color difference information, finding out all areas in the ground image within the range determined by the average color information and the color difference information, and constructing corresponding building information;
step S6: repeating steps S4-S6 until the user input ends;
step S7: calculating the building area from the collected building information: S = K × Σ(s_i × h_i).
In step S7, Σ denotes summation, s_i represents the number of pixels defined by the building outline in the i-th building information, and h_i represents the number of floors determined by the building type in the i-th building information; K is the ground scale corresponding to each pixel.
Step S1 indicates that the input of the invention is a ground image, typically obtained by satellite photography or aerial photography. The ground image to be input must meet a resolution requirement: specifically, the ground size corresponding to each pixel must not exceed 1 meter. The smaller the ground size per pixel, the higher the resolution of the ground image. At a typical screen resolution of 96 DPI, each pixel is about 0.265 mm across, so a ground size of 1 meter per pixel corresponds to displaying the image at a scale of roughly 1 cm to 38 meters on a 96-DPI screen. Currently, Google Earth and other satellite map software can generally display at scales down to about 1 cm to 20 meters. That is, in practical applications the ground image input to the invention can be obtained from the high-resolution satellite maps of such software. The ground image of a particular village shown in the example of fig. 1 is from Google Earth at a scale of 1 cm to 20 meters, which corresponds to a ground size of about 0.52 meters per pixel. The requirement that "the ground size corresponding to each pixel of the ground image is not more than 1 meter" means that if the ground size per pixel exceeds 1 meter, the error of the method of the invention becomes too large.
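As a quick check of the scale arithmetic above, the short Python snippet below reproduces the 0.265 mm pixel size, the 1 cm : 38 m display scale at 1 meter per pixel, and the roughly 0.52 m per pixel implied by a 1 cm : 20 m scale. It assumes a 96-DPI screen and is illustrative only.

```python
MM_PER_INCH = 25.4
dpi = 96

pixel_mm = MM_PER_INCH / dpi          # ~0.265 mm of screen per pixel
pixels_per_cm = 10.0 / pixel_mm       # ~37.8 pixels in 1 cm of screen

# At the resolution limit of 1 m of ground per pixel, 1 cm of screen covers ~38 m of ground.
print(pixels_per_cm * 1.0)            # ~37.8

# Conversely, a 1 cm : 20 m display scale implies ~0.53 m of ground per pixel
# (the description rounds this to about 0.52 m).
print(20.0 / pixels_per_cm)
```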
In addition, the ground image obtained in step S1 generally needs denoising. That is, the ground image input to the invention is preferably one that has already been denoised. Of course, those skilled in the art will understand that the denoising may also be performed in step S1 after the ground image is obtained. Gaussian filtering can generally be used for denoising the ground image; Gaussian filtering is well known to those skilled in the art and is not described in detail here.
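A minimal sketch of such a denoising step, assuming OpenCV is used; the file names and the 5x5 kernel size are illustrative choices, not prescribed by the description.

```python
import cv2

# Hypothetical input file; any satellite or aerial image readable by OpenCV works.
image = cv2.imread("ground_image.png")

# Gaussian filtering with a 5x5 kernel; sigma=0 lets OpenCV derive it from the kernel size.
denoised = cv2.GaussianBlur(image, (5, 5), 0)
cv2.imwrite("ground_image_denoised.png", denoised)
```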
Steps S2 to S6 form the UI interaction procedure. Step S2 displays the ground image on the screen so that the system can interact with the user. Step S3 specifically receives the user's designation of a building on the ground image. For example, in fig. 1 the area marked by the quadrangle 11 is a building designated by the user; in the invention, a building designated by the user is referred to as a sample building, and in the example of fig. 1 the quadrangle 11 is the building outline of that sample building. The building outline represents the footprint shape of the building: the footprint area of the building can be calculated from the area of the range defined by the building outline together with the reference scale, and, combined with the number of floors, the building area = footprint area × number of floors.
Obviously, the height of a building cannot be determined from the ground image, nor can its number of floors. The building's sun shadow offers some hints, but when buildings are densely packed the shadows are incomplete and the number of floors cannot be determined from them. Within a given designated area, however, there are not many building types or floor counts. For example, in the village illustrated in fig. 1 there are generally two types of buildings: older buildings and more recent ones. The older buildings are mostly single-storey, while the more recent ones are mostly three or four storeys. The building types and floor counts can be roughly established through a field survey, and can be determined more precisely by also considering the length of the buildings' sun shadows in the ground image.
Therefore, when designating a sample building, the user can determine its building type and specify it in step S3, so that the number of floors can be determined from the building type in the final step S7. The correspondence between building type and number of floors can be configured and input by the user in advance. Of course, those skilled in the art will understand that the building type may also be represented directly by the number of floors, such as 1, 2, 3 or 4.
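A minimal sketch of such a user-configured correspondence; the type names and floor counts below are illustrative and not taken from the description.

```python
# Hypothetical building types and their floor counts, configured by the user in advance.
BUILDING_TYPE_TO_FLOORS = {
    "old_single_storey": 1,
    "newer_three_storey": 3,
    "newer_four_storey": 4,
}

def floors_for(building_type: str) -> int:
    """Return h_i, the floor count used in step S7, for a given building type."""
    return BUILDING_TYPE_TO_FLOORS[building_type]
```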
Steps S4 and S5 are the machine analysis process. The artificial-intelligence approach here is to construct model feature data that characterizes the target, and then analyze and judge based on that feature data. In the method of the invention, the model feature data are the average color information and color difference information within the outline range; that is, step S4 constructs the model feature data of the sample building from the sample building itself. Step S5 then finds similar areas in the ground image according to the average color information and color difference information serving as the model feature data, and judges whether each is a building. Color may be represented by the three RGB channels, or by the three dimensions of hue, saturation and lightness. In this embodiment colors are represented by hue, saturation and lightness: the average color information consists of the average hue, average saturation and average lightness, and the color difference information consists of the hue difference, saturation difference and lightness difference. The hue, saturation and lightness differences here may be a variance (in particular a standard deviation) or a maximum-minimum difference.
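A minimal sketch of the feature extraction of step S4, assuming OpenCV and NumPy; using the standard deviation as the color difference is one of the options the description allows, and the function and variable names are illustrative.

```python
import cv2
import numpy as np

def color_features(image_bgr, outline_points):
    """Average color and color-difference features inside a building outline.

    `outline_points` is an (N, 2) array of pixel coordinates of the polygon
    drawn by the user in step S3.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(outline_points, dtype=np.int32)], 255)
    pixels = hsv[mask == 255].astype(np.float32)   # rows of (hue, saturation, lightness)
    mean = pixels.mean(axis=0)                     # average color information
    diff = pixels.std(axis=0)                      # color difference information
    return mean, diff
```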
For example, in fig. 1 the quadrangle 11 is the building outline input by the user and received in step S3. Step S4 calculates the average color information and color difference information of the ground image within the building outline represented by the quadrangle 11. Step S5 then finds all areas in the ground image that lie within the range determined by the average color information and the color difference information; this range is a color range. For example, in fig. 1 the average color information within the building outline represented by the quadrangle 11 is: average hue 60, average saturation 15%, average lightness 92%; the color difference information is: hue difference 5, saturation difference 3%, lightness difference 4%. The color range determined from the average color information and the color difference information is therefore: hue 55-65, saturation 12-18%, lightness 88-96%. Areas of the ground image lying within this color range can then be found. For example, the building enclosed by the quadrangle 11 in fig. 1 conforms to this color range, and the resulting area is illustrated in fig. 2. A building outline is then constructed from the area illustrated in fig. 2, and the corresponding building information can be constructed by combining it with the building type of the sample building. The area boundary illustrated in fig. 2 is irregular in shape and consists of a set of points, referred to as region edge points or region boundary points.
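A minimal sketch of the search in step S5, assuming OpenCV. The description speaks of traversing the image for pixels inside the color range; using cv2.inRange plus external contour extraction to obtain the region edge points is one plausible realization, not the patent's prescribed one. Note also that OpenCV stores hue on a 0-179 scale and saturation/value on 0-255, whereas the example above uses degrees and percentages.

```python
import cv2
import numpy as np

def candidate_regions(image_bgr, mean, diff):
    """Find all areas whose color lies within the range set by the sample building."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = np.clip(mean - diff, 0, 255).astype(np.uint8)
    upper = np.clip(mean + diff, 0, 255).astype(np.uint8)
    in_range = cv2.inRange(hsv, lower, upper)      # binary mask of matching pixels
    contours, _ = cv2.findContours(in_range, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return contours                                # each contour: a set of region edge points
```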
Obviously, the building information constructed in steps S4 and S5 is based on the sample building of step S3, and a single building sample cannot cover all the buildings in the ground image. The user is therefore asked to keep designating new sample buildings, so that steps S3, S4 and S5 are repeated until all building information in the ground image has been found. When the user input ends, that is, when the user stops designating sample buildings and all buildings in the ground image have been marked, the building area can be summed up as S = K × Σ(s_i × h_i); wherein Σ denotes summation, s_i represents the number of pixels defined by the building outline in the i-th building information, h_i represents the number of floors determined by the building type in the i-th building information, and K is the ground scale corresponding to each pixel. The building information entering this statistic comes both from the sample buildings designated by the user in step S3 and from the buildings found by the machine in steps S4 and S5.
In addition, to avoid interference from other ground objects in the image, this embodiment introduces, when constructing building information in step S5, a judgment of whether the area is a building, and corresponding building information is constructed only when it is. Specifically, the area is represented by a set of region edge points, and whether the area is a building is then judged from the shape and the size of the area.
The method for judging whether the area is a building or not according to the area shape comprises the following steps:
step S51: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point and a lower left boundary point;
step S52: then respectively constructing four contour edge lines according to the four boundary points;
step S53: judging whether the edge points of the area are positioned on the contour edge line;
step S54: counting the proportion of the edge points of the region in the set of the edge points of the region on the contour edge line;
step S55: and judging whether the area is a building or not according to the proportion of the edge points of the area on the outline border.
Taking the area illustrated in fig. 2 as an example, four boundary points can be found: an upper left boundary point TL, an upper right boundary point TR, a lower right boundary point BR and a lower left boundary point BL. In step S52, four contour edge lines can then be constructed from these four boundary points, namely: the line connecting the upper left boundary point TL and the upper right boundary point TR, the line connecting the upper left boundary point TL and the lower left boundary point BL, the line connecting the upper right boundary point TR and the lower right boundary point BR, and the line connecting the lower right boundary point BR and the lower left boundary point BL; see the contour edge lines 30 in the example of fig. 3.
In step S53, "judging whether a region edge point is located on a contour edge line" uses a distance calculation: the distances from the region edge point to the four contour edge lines are calculated, the minimum distance is selected, and the point is judged to be on a contour edge line if this minimum distance is smaller than a distance threshold. In this embodiment the distance threshold is preferably 2 to 3 pixels. For example, in the area illustrated in fig. 2, the region edge points marked by the circles 21 and 22 are not located on the contour edge lines, while the other region edge points are.
In step S55, "judging whether the area is a building according to the proportion of region edge points on the contour edge lines" uses a threshold test: the proportion of region edge points lying on the contour edge lines, counted in step S54, is compared with a first threshold. In this embodiment the first threshold is preferably 70%: if the proportion exceeds 70%, the area is judged to be a building, otherwise it is not.
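A minimal sketch of the shape test of steps S51-S55. Taking the four boundary points as the extreme points along the two diagonal directions is an assumption of this sketch (the description does not pin that construction down); the 2-pixel distance threshold and 70% first threshold follow the preferred values of the embodiment.

```python
import numpy as np

def looks_like_building(edge_points, distance_threshold=2.0, first_threshold=0.7):
    """Shape test of steps S51-S55 on an (N, 2) array of region edge points."""
    pts = np.asarray(edge_points, dtype=np.float64)
    s = pts.sum(axis=1)          # x + y: smallest at upper left, largest at lower right
    d = pts[:, 0] - pts[:, 1]    # x - y: smallest at lower left, largest at upper right
    tl, br = pts[np.argmin(s)], pts[np.argmax(s)]
    bl, tr = pts[np.argmin(d)], pts[np.argmax(d)]

    def point_to_segment(p, a, b):
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-9), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    edges = [(tl, tr), (tr, br), (br, bl), (bl, tl)]   # the four contour edge lines
    on_edge = sum(
        1 for p in pts
        if min(point_to_segment(p, a, b) for a, b in edges) < distance_threshold
    )
    return on_edge / len(pts) >= first_threshold
```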
The method for judging whether the area is a building or not according to the area size comprises the following steps:
step S56: calculating the area according to the four boundary points;
step S57: calculating the area of the sample building determined according to the building outline;
step S58: and judging whether the area is a building or not according to the area ratio obtained in the step S56 and the step S57.
The four boundary points in step S56 are the same four boundary points found in step S51. The area calculated in step S56 is that of the building found by the machine in steps S4 and S5, while step S57 calculates the area of the sample building designated by the user in step S3. In this embodiment, the areas in steps S56 and S57 are areas in the ground image, specifically numbers of pixels enclosed by the outline or boundary, rather than the footprint areas of the actual buildings. Of course, those skilled in the art will understand that the areas in steps S56 and S57 could instead be the actual footprint areas.
In step S58, "judging whether the area is a building according to the ratio of the areas obtained in steps S56 and S57" uses a threshold test: the ratio of the two areas is calculated and compared with a second threshold. The second threshold is a range, preferably 0.5 to 2 in this embodiment: if the ratio lies between 0.5 and 2, the area is a building, otherwise it is not.
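A minimal sketch of the size test of steps S56-S58, measuring each polygon's pixel area with the shoelace formula; the 0.5-2 range follows the preferred second threshold of the embodiment, and the function names are illustrative.

```python
import numpy as np

def size_matches_sample(boundary_points, sample_outline, second_threshold=(0.5, 2.0)):
    """Size test of steps S56-S58: compare the candidate area with the sample building area."""
    def polygon_area(vertices):
        # Shoelace formula over the polygon vertices, in pixel units.
        v = np.asarray(vertices, dtype=np.float64)
        x, y = v[:, 0], v[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    ratio = polygon_area(boundary_points) / max(polygon_area(sample_outline), 1e-9)
    low, high = second_threshold
    return low <= ratio <= high
```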
As described above, this embodiment is a building area estimation method combining UI interaction with machine computation. For convenience of UI interaction on the one hand, and because buildings can be occluded by other obstacles on the other, this embodiment uses a polygon as the building outline in step S3 and a quadrangle with the four boundary points as vertices in step S5. After the machine has constructed the quadrangle as a building outline in step S5, the user can edit it: for example, the user may add vertices so that the building outline becomes a pentagon, a hexagon or a polygon with even more sides, or may adjust the positions of the polygon's vertices. The user can also modify the building type. That is, in this embodiment, step S5 further includes a step of receiving corrected building information input by the user.
Furthermore, as a UI interactive system, this embodiment also allows the user to correct the color difference information after the machine has calculated, in step S4, the average color information and color difference information within the range determined by the building outline of the sample building. That is, step S4 includes receiving corrected color difference information input by the user.
Further, in consideration of error, in this embodiment the number of pixels s_i defined by the building outline in the i-th building information in step S7 is calculated as s_i = sa_i + sb_i × c, where sa_i is the number of pixels within the range enclosed by the building outline in the i-th building information, sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a preset coefficient. In this embodiment, as a UI interactive system, the preset coefficient c can be edited by the user; it generally lies between 0.5 and 2.0 and is initialized to 1.0.
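A minimal sketch of the final statistic of step S7 with the refined s_i above; the dictionary keys and sample numbers are illustrative, and K is taken here as the ground area covered by one pixel, which is one reading of "the ground scale corresponding to each pixel".

```python
def total_building_area(buildings, k, c=1.0):
    """Total building area S = K * sum(s_i * h_i), with s_i = sa_i + sb_i * c."""
    return k * sum((b["sa"] + b["sb"] * c) * b["floors"] for b in buildings)

# Example: two buildings in an image with ~0.52 m of ground per pixel,
# so each pixel covers roughly 0.52 * 0.52 ≈ 0.27 m² of ground.
buildings = [
    {"sa": 1800, "sb": 160, "floors": 1},   # older single-storey building
    {"sa": 2400, "sb": 190, "floors": 3},   # newer three-storey building
]
print(total_building_area(buildings, k=0.52 * 0.52))
```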

Claims (10)

1. A method for estimating a building area based on a ground image, comprising the steps of:
step S1: acquiring a ground image shot by a satellite or an aerial photo, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
step S2: presenting the ground image on a screen and then waiting for user input;
step S3: acquiring building information of a sample building appointed by a user on a screen through user input; the building information comprises a building outline and a building type;
step S4: extracting average color information and color difference information within a range determined by a building outline of the sample building within the ground image;
step S5: traversing the ground image according to the average color information and the color difference information, finding out all areas in the ground image within the range determined by the average color information and the color difference information, and constructing corresponding building information;
step S6: repeating steps S4-S6 until the user input ends;
step S7: calculating the building area S = K × Σ(s_i × h_i) from the collected building information; wherein Σ denotes summation, s_i represents the number of pixels defined by the building outline in the i-th building information, h_i represents the number of floors determined by the building type in the i-th building information, and K is the ground scale corresponding to each pixel.
2. The method according to claim 1, wherein in step S7, s_i = sa_i + sb_i × c; wherein sa_i is the number of pixels within the range defined by the building outline in the i-th building information, sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a coefficient set in advance.
3. The method for estimating a building area according to a ground image as claimed in claim 1, wherein the region is represented by a set of region edge points in the step S5; the step S5 further includes a step of determining whether the area is a building; the step of judging whether the area is a building comprises the following steps:
step S51: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point and a lower left boundary point;
step S52: then respectively constructing four contour edge lines according to the four boundary points;
step S53: judging whether the area edge point is positioned on the contour edge line or not according to the distance from the area edge point to the contour edge line;
step S54: counting the proportion of the region edge points on the contour edge line in the region edge point set;
step S55: and judging whether the area is a building according to whether the proportion of region edge points on the contour edge lines exceeds a first threshold.
4. The method of claim 3, wherein the step of determining whether the area is a building further comprises the steps of:
step S56: calculating the area according to the four boundary points;
step S57: calculating the area of the sample building determined according to the building outline;
step S58: and judging whether the area ratio obtained in the step S56 and the step S57 exceeds a second threshold value or not, and judging whether the area is a building or not.
5. The method according to claim 1, wherein the step S4 further comprises receiving the corrected color difference information input by the user; the step S5 further includes receiving the modified building information input by the user.
6. An apparatus for estimating a building area based on a ground image, comprising:
module M1, configured to: acquiring a ground image shot by a satellite or an aerial photo, wherein the ground size corresponding to each pixel of the ground image is not more than 1 meter;
module M2, configured to: presenting the ground image on a screen and then waiting for user input;
module M3, configured to: acquiring building information of a sample building appointed by a user on a screen through user input; the building information comprises a building outline and a building type;
a module M4 for: extracting average color information and color difference information within a range determined by a building outline of the sample building within the ground image;
a module M5 for: traversing the ground image according to the average color information and the color difference information, finding out all areas in the ground image within the range determined by the average color information and the color difference information, and constructing corresponding building information;
module M6, configured to: repeatedly calling the modules M4-M6 until the user input is finished;
module M7, configured to: calculating the building area S = K × Σ(s_i × h_i) from the collected building information; wherein Σ denotes summation, s_i represents the number of pixels defined by the building outline in the i-th building information, h_i represents the number of floors determined by the building type in the i-th building information, and K is the ground scale corresponding to each pixel.
7. The apparatus for estimating building area according to ground image as claimed in claim 6, wherein in the module M7, s_i = sa_i + sb_i × c; wherein sa_i is the number of pixels within the range defined by the building outline in the i-th building information, sb_i is the number of pixels of the building outline itself in the i-th building information, and c is a coefficient set in advance.
8. The apparatus for estimating a building area according to a ground image as claimed in claim 6, wherein in the module M5, the region is represented by a set of region edge points; the module M5 further includes a module for determining whether the area is a building; the module for judging whether the area is a building comprises the following modules:
module M51, configured to: four boundary points are obtained from the set of region edge points: an upper left boundary point, an upper right boundary point, a lower right boundary point and a lower left boundary point;
module M52, configured to: then respectively constructing four contour edge lines according to the four boundary points;
module M53, configured to: judging whether the area edge point is positioned on the contour edge line or not according to the distance from the area edge point to the contour edge line;
module M54, configured to: counting the proportion of the region edge points on the contour edge line in the region edge point set;
module M55, configured to: judging whether the area is a building according to whether the proportion of region edge points on the contour edge lines exceeds a first threshold.
9. The apparatus for estimating an area of a building according to a ground image as claimed in claim 8, wherein the module for determining whether the area is a building further comprises the modules for:
module M56, configured to: calculating the area according to the four boundary points;
module M57, configured to: calculating the area of the sample building determined according to the building outline;
module M58, configured to: and judging whether the area ratio obtained by the module M56 and the module M57 exceeds a second threshold value or not, and judging whether the area is a building or not.
10. The apparatus for estimating building area according to ground image as claimed in claim 6, wherein said module M4 further comprises receiving the corrected color difference information inputted by the user; the module M5 also includes receiving revised building information entered by the user.
CN202111619169.1A 2021-12-28 2021-12-28 Method and device for estimating building area according to ground image Active CN114581786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111619169.1A CN114581786B (en) 2021-12-28 2021-12-28 Method and device for estimating building area according to ground image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111619169.1A CN114581786B (en) 2021-12-28 2021-12-28 Method and device for estimating building area according to ground image

Publications (2)

Publication Number Publication Date
CN114581786A true CN114581786A (en) 2022-06-03
CN114581786B CN114581786B (en) 2022-11-25

Family

ID=81769575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111619169.1A Active CN114581786B (en) 2021-12-28 2021-12-28 Method and device for estimating building area according to ground image

Country Status (1)

Country Link
CN (1) CN114581786B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013116793A2 (en) * 2012-02-03 2013-08-08 Eagle View Technologies, Inc Systems and methods for estimation of building floor area
CN103699900A (en) * 2014-01-03 2014-04-02 西北工业大学 Automatic batch extraction method for horizontal vector contour of building in satellite image
CN103837537A (en) * 2014-03-17 2014-06-04 武汉大学 Sand-free metal inkjet printing plate lattice point area rate measurement method
CN104463868A (en) * 2014-12-05 2015-03-25 北京师范大学 Rapid building height obtaining method based on parameter-free high-resolution image
CN105283884A (en) * 2013-03-13 2016-01-27 柯法克斯公司 Classifying objects in digital images captured using mobile devices
CN108052876A (en) * 2017-11-28 2018-05-18 广东数相智能科技有限公司 Regional development appraisal procedure and device based on image identification
CN108228651A (en) * 2016-12-21 2018-06-29 青岛祥智电子技术有限公司 A kind of GIS-Geographic Information System
CN109034073A (en) * 2018-07-30 2018-12-18 深圳大学 Predict method, system, equipment and the medium of building demolition waste yield
US10339646B1 (en) * 2019-01-14 2019-07-02 Sourcewater, Inc. Image processing of aerial imagery for energy infrastructure analysis using pre-processing image selection
CN110570521A (en) * 2019-09-10 2019-12-13 同济大学 urban ground roughness calculation method
JP2020016923A (en) * 2018-07-23 2020-01-30 株式会社パスコ House transfer estimation apparatus and program
US20200318962A1 (en) * 2019-04-05 2020-10-08 Ikegps Group Limited Methods of measuring structures
CN113128819A (en) * 2020-01-16 2021-07-16 江苏和网源电气有限公司 Distributed photovoltaic system software design method based on Google map API
CN113516135A (en) * 2021-06-23 2021-10-19 江苏师范大学 Remote sensing image building extraction and contour optimization method based on deep learning

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013116793A2 (en) * 2012-02-03 2013-08-08 Eagle View Technologies, Inc Systems and methods for estimation of building floor area
CN105283884A (en) * 2013-03-13 2016-01-27 柯法克斯公司 Classifying objects in digital images captured using mobile devices
CN103699900A (en) * 2014-01-03 2014-04-02 西北工业大学 Automatic batch extraction method for horizontal vector contour of building in satellite image
CN103837537A (en) * 2014-03-17 2014-06-04 武汉大学 Sand-free metal inkjet printing plate lattice point area rate measurement method
CN104463868A (en) * 2014-12-05 2015-03-25 北京师范大学 Rapid building height obtaining method based on parameter-free high-resolution image
CN108228651A (en) * 2016-12-21 2018-06-29 青岛祥智电子技术有限公司 A kind of GIS-Geographic Information System
CN108052876A (en) * 2017-11-28 2018-05-18 广东数相智能科技有限公司 Regional development appraisal procedure and device based on image identification
JP2020016923A (en) * 2018-07-23 2020-01-30 株式会社パスコ House transfer estimation apparatus and program
CN109034073A (en) * 2018-07-30 2018-12-18 深圳大学 Predict method, system, equipment and the medium of building demolition waste yield
US10339646B1 (en) * 2019-01-14 2019-07-02 Sourcewater, Inc. Image processing of aerial imagery for energy infrastructure analysis using pre-processing image selection
US20200318962A1 (en) * 2019-04-05 2020-10-08 Ikegps Group Limited Methods of measuring structures
CN110570521A (en) * 2019-09-10 2019-12-13 同济大学 urban ground roughness calculation method
CN113128819A (en) * 2020-01-16 2021-07-16 江苏和网源电气有限公司 Distributed photovoltaic system software design method based on Google map API
CN113516135A (en) * 2021-06-23 2021-10-19 江苏师范大学 Remote sensing image building extraction and contour optimization method based on deep learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JUN CHEN et al.: "Building Area Estimation in Drone Aerial Images Based on Mask R-CNN", IEEE Geoscience and Remote Sensing Letters *
KARAGIANNI A. et al.: "Terrestrial Laser Scanning and Satellite Data in Cultural Heritage Building Documentation", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences *
LIU BO: "Research on Building Size Estimation Methods Based on Electronic Maps", China Master's Theses Full-text Database (Basic Sciences) *
LIU HAO: "Research on Man-made Object Extraction Based on High-Resolution Remote Sensing Images", China Master's Theses Full-text Database (Basic Sciences) *
LI YINXIN: "Research and Application of 3D Building Reconstruction Technology for Geographic Information Systems", China Master's Theses Full-text Database (Engineering Science and Technology II) *

Also Published As

Publication number Publication date
CN114581786B (en) 2022-11-25

Similar Documents

Publication Publication Date Title
JP4319857B2 (en) How to create a map
CN105719318A (en) Educational toy set and HSV based color identification method for Rubik's cube
CN107330861B (en) Image salient object detection method based on diffusion distance high-confidence information
CN108960135A (en) Intensive Ship Target accurate detecting method based on High spatial resolution remote sensing
WO2020052352A1 (en) Method and device for damage segmentation of vehicle damage image
CN112013921B (en) Method, device and system for acquiring water level information based on water level gauge measurement image
JP4521568B2 (en) Corresponding point search method, relative orientation method, three-dimensional image measurement method, corresponding point search device, relative orientation device, three-dimensional image measurement device, corresponding point search program, and computer-readable recording medium recording the corresponding point search program
CN109631766A (en) A kind of wood plank dimension measurement method based on image
CN114241326B (en) Progressive intelligent production method and system for ground feature elements of remote sensing images
CN113569647B (en) AIS-based ship high-precision coordinate mapping method
CN114692991A (en) Wolfberry yield prediction method and system based on deep learning
CN114581786B (en) Method and device for estimating building area according to ground image
CN112598367A (en) Engineering project construction process monitoring method and system, intelligent terminal and storage medium
Fursov et al. Correction of distortions in color images based on parametric identification
CN114549780B (en) Intelligent detection method for large complex component based on point cloud data
CN113610782B (en) Building deformation monitoring method, equipment and storage medium
CN110986891B (en) System for accurately and rapidly measuring crown width of tree by using unmanned aerial vehicle
CN114419113A (en) Building construction progress identification method and device and electronic equipment
CN109461137B (en) Object-oriented orthographic image quality inspection method based on gray level correlation
CN113658239A (en) Building construction progress identification method and device, electronic equipment and system
CN112906469A (en) Fire-fighting sensor and alarm equipment identification method based on building plan
CN115376119B (en) License plate recognition method and device, license plate recognition equipment and storage medium
CN113139454B (en) Road width extraction method and device based on single image
CN110599587A (en) 3D scene reconstruction technology based on single image
CN113902880B (en) Construction production auxiliary method and device based on augmented reality technology and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant