CN112183301B - Intelligent building floor identification method and device - Google Patents

Intelligent building floor identification method and device

Info

Publication number
CN112183301B
CN112183301B (application CN202011013179.6A)
Authority
CN
China
Prior art keywords
building
windows
window
suggestion
floor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011013179.6A
Other languages
Chinese (zh)
Other versions
CN112183301A (en)
Inventor
高云龙 (Gao Yunlong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhai Dashi Intelligence Technology Co ltd
Original Assignee
Wuhai Dashi Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhai Dashi Intelligence Technology Co ltd
Priority to CN202011013179.6A
Publication of CN112183301A
Application granted
Publication of CN112183301B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent building floor identification method and device. The method comprises: parsing the real-scene three-dimensional data and calculating the minimum bounding rectangle of the building on the two-dimensional plane; projecting the building facade onto four constructed images and combining them to generate a complete building facade image; after manually annotating the building facade images with a data annotation tool, feeding the annotated samples into a feature extraction network, fusing upper- and lower-layer features through an FPN network, attaching a feature enhancement structure, generating proposal windows through a region proposal network, and merging the proposal windows of all levels to obtain the window masks; and extracting vertical columns of windows from the horizontal arrangement of the windows and deriving the building floor height from the average height difference within those columns so as to determine the building floor positions. The scheme solves the poor robustness of traditional floor identification methods, strengthens the robustness of window extraction and floor identification, and ensures detection and recognition accuracy.

Description

Intelligent building floor identification method and device
Technical Field
The invention relates to the field of building analysis based on real-scene three-dimensional data, and in particular to an intelligent building floor identification method and device.
Background
Buildings are an important carrier of human life; floor information describes a building's detailed characteristics and can support fine-grained services for applications such as smart cities. Typically the window structure is consistent with the building's floors, that is, one floor corresponds to one row of windows, so window information is an effective basis for floor extraction. Buildings in real-scene three-dimensional data match the real world in shape, and the different windows of a building share the same size, which greatly reduces the difficulty of identifying them.
At present, some studies extract windows based on manually designed window feature rules. These can basically achieve floor identification, but their accuracy is limited in complex and variable scenes; owing to structural factors such as window type and size, and to the influence of the data acquisition mode, such traditional methods have poor robustness.
Disclosure of Invention
In view of the above, embodiments of the invention provide an intelligent building floor identification method and device to solve the poor robustness of existing floor identification methods based on window extraction.
In a first aspect of the embodiments of the present invention, there is provided an intelligent building floor identification method, comprising:
parsing the real-scene three-dimensional data to obtain the patch cluster of a single building, and calculating the minimum bounding rectangle of the single building on the two-dimensional plane;
constructing four images that respectively take the four sides of the minimum bounding rectangle as their widths and the height of the single building as their height, projecting the facade of the single building onto each of the four images, and combining them to generate a complete building facade image;
after manually annotating the building facade images with a data annotation tool, feeding the annotated samples into a feature extraction network, fusing upper- and lower-layer features through an FPN network, attaching a bottom-up feature enhancement structure, generating proposal windows through a region proposal network, and merging the proposal windows of all levels to obtain the window masks;
and extracting vertical columns of windows from the horizontal arrangement of the windows, and deriving the building floor height from the average height difference within those columns so as to determine the building floor positions.
In a second aspect of the embodiments of the present invention, there is provided an apparatus for intelligent building floor identification, comprising:
a parsing module configured to parse the real-scene three-dimensional data, obtain the patch cluster of a single building, and calculate the minimum bounding rectangle of the single building on the two-dimensional plane;
a projection module configured to construct four images that respectively take the four sides of the minimum bounding rectangle as their widths and the height of the single building as their height, project the facade of the single building onto each of the four images, and combine them to generate a complete building facade image;
a generation module configured to, after the building facade images are manually annotated with a data annotation tool, feed the annotated samples into a feature extraction network, fuse upper- and lower-layer features through an FPN network, attach a bottom-up feature enhancement structure, generate proposal windows through a region proposal network, and merge the proposal windows of all levels to obtain the window masks;
and a calculation module configured to extract vertical columns of windows from the horizontal arrangement of the windows and derive the building floor height from the average height difference within those columns so as to determine the building floor positions.
In a third aspect of the embodiments of the present invention, there is provided an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect of the embodiments of the present invention, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method provided by the first aspect of the embodiments of the present invention.
In the embodiments of the invention, the real-scene three-dimensional data are projected into two dimensions, windows are extracted from the building facade images with an improved Mask R-CNN framework to generate the window masks, and the average height difference is then computed from pixel statistics in the horizontal and vertical directions to determine the floor height and refine the floor positions. Floor identification can be performed on all kinds of real-scene three-dimensional data with strong robustness and high accuracy, solving the poor robustness of traditional floor identification methods. Moreover, no hand-crafted features are needed to identify and extract the building windows, which raises the degree of automation of window extraction. The fault tolerance of the floor identification is high, so correct floors can be extracted even when window recognition is incomplete.
Drawings
In order to illustrate the technical solutions of the embodiments more clearly, the drawings used in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an intelligent building floor identification method according to an embodiment of the invention;
Fig. 2 is another schematic flow chart of the intelligent building floor identification method according to an embodiment of the invention;
Fig. 3 is a schematic view of the feature enhancement structure according to an embodiment of the invention;
Fig. 4 is a schematic view of an unfolded building facade according to an embodiment of the invention;
Fig. 5 is a schematic view of a building window recognition result according to an embodiment of the invention;
Fig. 6 is a schematic diagram of the row- and column-direction pixel accumulation of the windows according to an embodiment of the invention;
Fig. 7 is a schematic diagram of a floor recognition result according to an embodiment of the invention;
Fig. 8 is a schematic structural diagram of an apparatus for intelligent building floor identification according to an embodiment of the invention.
Detailed Description
In order to make the objects, features and advantages of the present invention clearer, the technical solutions in the embodiments are described in detail below with reference to the accompanying drawings. Obviously, the embodiments described below are only some, not all, embodiments of the invention; all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of protection. The examples are given to illustrate the invention, not to limit its scope.
The term "comprising" in the description, the claims and the above drawings, and other expressions of similar meaning, denotes a non-exclusive inclusion: a process, method, system or apparatus comprising a series of steps or elements is not limited to the steps or elements listed.
The implementation process of the invention is shown in Fig. 1: the floor positions of a single building can be determined from the building's real-scene three-dimensional data.
It should be noted that real-scene three-dimensional data generally contain rich texture and three-dimensional structural information and can restore a building accurately. However, because the data are generated from unmanned aerial vehicle images by an algorithm, shape deformation is unavoidable, so floor extraction acting directly on the building model is not robust. For the present invention, computing floor information from two-dimensional images via building facade texture mapping is therefore the better approach. Windows are the features of a building that best identify its floors, which fits the human visual perception model. Building floor information can be identified accurately through facade texture mapping, image extraction and floor-height recognition.
To acquire a building facade, the correct building orientation is first computed so that the resulting texture image is essentially parallel to the facade. Building outlines are generally rectangular, so directly taking the oriented bounding rectangle is an effective treatment. Taking the four sides of that rectangle as the base and the building height as the height, the three-dimensional model is projected onto four planes, which yields accurate facade images. The front-to-back distances of the projected model are taken into account so that each plane receives the facade texture closest to it.
Window recognition is an instance segmentation task in computer vision, a fine-grained form of target detection that requires classification and localization at the pixel level. Mask R-CNN is a typical instance segmentation network, and its pipeline is as follows: first, the input image passes through a feature extraction network to obtain multi-scale feature information; candidate regions are then generated by a Region Proposal Network (RPN) and their number is pruned with the Non-Maximum Suppression (NMS) algorithm; finally, the features contained in each candidate region are used for classification, bounding-box regression and mask generation. The window extraction method of the invention improves on this Mask R-CNN base network, strengthening the feature description capability and further raising the recognition accuracy.
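The NMS pruning step named above is standard; a minimal greedy, IoU-based sketch follows (the box format, threshold and function name are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over boxes given as [x1, y1, x2, y2].

    Illustrative sketch: keeps the highest-scoring box, drops candidates
    overlapping it by more than iou_thresh, and repeats.
    """
    boxes = np.asarray(boxes, dtype=float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = np.argsort(scores)[::-1]           # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection rectangle between box i and every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_thresh]        # drop overlapping candidates
    return keep
```

Candidates are processed strictly in score order, so a lower-scoring duplicate of an already-kept window can never survive.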
Floor identification considers the distribution structure of the windows. Typically the windows of a building are aligned, so given the vertical arrangement of the windows, the height difference between vertically adjacent windows is taken as the floor height. Since window recognition may miss windows or detect them wrongly, the case where two detected windows are actually several floors apart must be considered. After the floor height is computed, the actual height of each floor is determined using the building height and the horizontal arrangement of the windows; since some windows of a building lie in corridors and do not mark actual floor positions, the optimal position heights need to be computed by optimization.
Specifically, referring to Fig. 2, which is a schematic flow chart of the intelligent building floor identification method according to an embodiment of the invention, the method includes:
S201, parsing the real-scene three-dimensional data to obtain the patch cluster of a single building, and calculating the minimum bounding rectangle of the single building on the two-dimensional plane;
the real-scene three-dimensional data consists of a three-dimensional surface patch model and texture images, after surface patch clusters of a single building are obtained, the clusters are calculated to be projected in an X-Y plane to obtain a two-dimensional outline boundary of the building, and the minimum bounding rectangle of the outline boundary is calculated by utilizing the characteristic that the building is generally rectangular. Therefore, the acquired elevation image is basically parallel to the actual elevation of the building, deformation factors of windows can be greatly reduced, and the sizes of the windows of different buildings are basically in uniform sizes.
S202, constructing four images that respectively take the four sides of the minimum bounding rectangle as their widths and the height of the single building as their height, projecting the facade of the single building onto each of the four images, and combining them to generate a complete building facade image;
The building facade image, that is, the facade texture image, is formed by projecting the building model onto the planes whose X direction is one of the four sides of the bounding rectangle and whose Y direction is vertical.
Specifically, for each triangular patch of the single building's real-scene three-dimensional data, its pixel coverage on the projection plane and its distance to that plane are computed; this distance is compared one by one against the distance values stored at the covered pixels, a pixel is replaced with the value of the patch texture when the patch is closer, and the stored distance is updated accordingly. Looping over all triangular patches yields the final facade image. Splicing the four facade images side by side retains the maximum window information.
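The per-pixel distance comparison described above is a classic z-buffer. A simplified sketch, assuming the triangles are already expressed in projection-plane coordinates and each patch carries a single texture value (both simplifications of the patent's texture mapping):

```python
import numpy as np

def render_facade(triangles, colors, width, height):
    """triangles: (T, 3, 3) array of (x, y, depth) vertices in
    projection-plane coordinates; colors: one texture value per patch.
    Illustrative sketch of the z-buffered facade projection."""
    image = np.zeros((height, width))
    zbuf = np.full((height, width), np.inf)    # stored distance per pixel
    for tri, col in zip(triangles, colors):
        (x0, y0, d0), (x1, y1, d1), (x2, y2, d2) = tri
        xs = range(max(int(min(x0, x1, x2)), 0),
                   min(int(max(x0, x1, x2)) + 1, width))
        ys = range(max(int(min(y0, y1, y2)), 0),
                   min(int(max(y0, y1, y2)) + 1, height))
        denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
        if denom == 0:
            continue                           # degenerate patch
        for y in ys:
            for x in xs:
                # barycentric coordinates of the pixel
                a = ((y1 - y2) * (x - x2) + (x2 - x1) * (y - y2)) / denom
                b = ((y2 - y0) * (x - x2) + (x0 - x2) * (y - y2)) / denom
                c = 1 - a - b
                if a < 0 or b < 0 or c < 0:
                    continue                   # pixel outside the patch
                depth = a * d0 + b * d1 + c * d2
                if depth < zbuf[y, x]:         # closer patch wins the pixel
                    zbuf[y, x] = depth
                    image[y, x] = col
    return image
```

A real implementation would interpolate texture coordinates rather than a single color, but the closer-patch-wins logic is exactly the comparison the text describes.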
S203, after manually annotating the building facade images with a data annotation tool, feeding the annotated samples into the feature extraction network, fusing upper- and lower-layer features through the FPN network, attaching the feature enhancement structure, generating proposal windows through the region proposal network, and merging the proposal windows of all levels to obtain the window masks;
window data is manually marked with VIA data markers, and a plurality of window masks on each image are recorded.
The feature enhancement structure is a bottom-up path. FPN (Feature Pyramid Network) is a deep learning structure for detecting objects at different scales; in addition to the conventional FPN, the feature extraction network is connected to a bottom-up feature enhancement structure, so that the feature extraction network and the region proposal network (RPN) can be trained on the annotated samples to generate proposal windows corresponding to the building windows.
It should be noted that the invention introduces a bottom-up path augmentation feature enhancement structure into the original Mask R-CNN model and fuses the feature information of each level to predict the window results.
Specifically, convolutional-layer features are extracted with the feature extraction network. The initial features are extracted by a deep residual network (e.g. ResNet-50), which progressively reduces the image size and enlarges the receptive field; a Feature Pyramid Network (FPN) is then added to connect the feature information at each resolution.
To further adapt to the diversity of targets and strengthen information propagation between levels, a bottom-up path augmentation feature enhancement structure is added. It can be seen as a reversed FPN; the computation of each layer is shown in Fig. 3: the previous-level feature L_i, after a convolution and pooling operation, is connected with the same-resolution FPN feature F_{i+1} to obtain L_{i+1}.
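The layer update just described can be sketched in numpy, with a stride-2 max pooling standing in for the learned convolution-and-pooling step (purely illustrative; the actual network uses trained convolutions):

```python
import numpy as np

def downsample2(x):
    """Stride-2 max pooling standing in for the conv + pool step."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return np.maximum.reduce([x[0::2, 0::2], x[0::2, 1::2],
                              x[1::2, 0::2], x[1::2, 1::2]])

def bottom_up_path(fpn_features):
    """fpn_features: FPN maps F_1..F_n ordered fine -> coarse, each half the
    previous resolution. Returns the enhanced maps L_1..L_n, where
    L_1 = F_1 and L_{i+1} = downsample(L_i) + F_{i+1}."""
    levels = [fpn_features[0]]
    for f_next in fpn_features[1:]:
        levels.append(downsample2(levels[-1]) + f_next)
    return levels
```

The point of the structure is visible even in this toy form: information from the finest level is carried upward through every coarser level instead of only flowing top-down as in the plain FPN.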
The RPN then generates the proposal windows; the features of different levels are fused to produce fixed-size feature maps, on which classification, box localization and mask regression are performed.
S204, extracting vertical columns of windows from the horizontal arrangement of the windows, and deriving the building floor height from the average height difference within those columns so as to determine the building floor positions.
It will be appreciated that the building floor height is expressed as the height difference between an upper and a lower window, so accurately finding such window pairs is the key to computing it. Specifically, the number of window-mask pixels in each row and each column of the building facade image is recorded, giving the accumulated window pixel counts in the horizontal and vertical directions; the columns holding the local maxima of the vertical accumulation are taken as the vertical center lines of the window columns; the vertically adjacent window pairs are then determined, and the mean height difference over all pairs is taken as the floor height.
Since window extraction may miss or falsely detect windows, some outliers need to be removed first; the mean of the remaining pair-wise window height differences is then taken as the floor height. A local maximum of the row-direction accumulation marks the horizontal center line of a row of windows and is taken as the mid-height of the corresponding floor.
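As a sketch of the row-direction statistics, the following sums the window mask per image row, takes the center of each run of non-zero rows as a window-row center line (a simple stand-in for the local-maximum criterion), and averages the successive spacings to obtain the floor height:

```python
import numpy as np

def floor_height_from_mask(mask):
    """mask: (H, W) binary array, nonzero where a pixel belongs to a window
    mask. Returns (row_centers, floor_height). Illustrative sketch; the
    run-center criterion is an assumption, not the patent's exact rule."""
    row_sum = mask.sum(axis=1)                      # accumulated pixels per row
    nz = np.concatenate(([0], (row_sum > 0).astype(int), [0]))
    starts = np.flatnonzero(np.diff(nz) == 1)       # first row of each run
    ends = np.flatnonzero(np.diff(nz) == -1)        # one past the last row
    centers = (starts + ends - 1) / 2.0             # center line per window row
    if len(centers) < 2:
        return centers, None                        # not enough window rows
    floor_height = float(np.mean(np.diff(centers))) # mean row-to-row spacing
    return centers, floor_height
```

Outlier rejection (e.g. discarding spacings far from the median before averaging) would slot in just before the mean, matching the text's handling of missed windows.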
Since some buildings have windows in the middle of stair corridors, incorrect floor positions may otherwise be computed. Preferably, guided by the previously computed floor height, several possible floor-position distributions are evaluated, and the distribution covering the maximum number of windows is taken as the correct floor positions.
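This preferred optimization can be sketched as a small phase search: with the floor height fixed, slide the offset of the floor grid and keep the offset whose floor lines cover the most detected window centers (the step size and tolerance are illustrative assumptions):

```python
import numpy as np

def best_floor_offset(window_centers, floor_height, building_height, tol=1.0):
    """Choose the vertical offset of the floor grid that explains the most
    detected window center heights; returns (offset, covered_count).
    Illustrative sketch of the 'distribution with the most windows' rule."""
    centers = np.asarray(window_centers, float)
    best = (0.0, -1)
    for offset in np.arange(0.0, floor_height, 0.25):   # candidate phases
        lines = np.arange(offset, building_height, floor_height)
        # a window is covered if some floor line is within tol of its center
        covered = sum(np.min(np.abs(lines - c)) <= tol for c in centers)
        if covered > best[1]:
            best = (float(offset), int(covered))
    return best
```

A corridor window that sits between floors (the outlier in the test below) simply fails to be covered by the winning grid, which is exactly how the text excludes non-floor windows.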
In one embodiment, the unfolded building facade image is shown in Fig. 4, the window recognition result in Fig. 5, the row- and column-direction pixel accumulation statistics of the recognition result in Fig. 6, and the final floor recognition result in Fig. 7.
By the method provided in this embodiment, the floor information of a building can be identified accurately; the method suits all kinds of real-scene three-dimensional texture images and offers high robustness and high fault tolerance.
It should be understood that the sequence number of each step in the above embodiment does not mean the sequence of execution, and the execution sequence of each process should be determined by its function and internal logic, and should not be construed as limiting the implementation process of the embodiment of the present invention.
Fig. 8 is a schematic structural diagram of an apparatus for intelligent building floor identification according to an embodiment of the present invention. The apparatus includes:
a parsing module 810 configured to parse the real-scene three-dimensional data, obtain the patch cluster of a single building, and calculate the minimum bounding rectangle of the single building on the two-dimensional plane;
a projection module 820 configured to construct four images that respectively take the four sides of the minimum bounding rectangle as their widths and the height of the single building as their height, project the facade of the single building onto each of the four images, and combine them to generate a complete building facade image.
Specifically, for each triangular patch of the single building's three-dimensional data, the module computes the patch's coverage on the projection plane and its distance to that plane, compares this distance one by one against the distance values stored at the covered pixels, replaces a pixel with the value of the patch texture when the patch is closer, and updates the stored distance; looping over all triangular patches yields the final facade image.
a generation module 830 configured to, after the building facade images are manually annotated with the data annotation tool, feed the annotated samples into the feature extraction network, fuse upper- and lower-layer features through the FPN network, attach the bottom-up feature enhancement structure, generate proposal windows through the region proposal network, and merge the proposal windows of all levels to obtain the window masks.
Specifically, proposal windows are generated through the region proposal network, the region features of different levels are fused into fixed-size feature maps, and classification, box localization and mask regression are performed on those feature maps.
a calculation module 840 configured to extract vertical columns of windows from the horizontal arrangement of the windows and derive the building floor height from the average height difference within those columns so as to determine the building floor positions.
Specifically, the number of window-mask pixels in each row and each column of the building facade image is recorded, giving the accumulated window pixel counts in the horizontal and vertical directions; the columns holding the local maxima of the vertical accumulation are taken as the vertical center lines of the window columns; the vertically adjacent window pairs are then determined, and the mean height difference over all pairs is taken as the floor height.
Preferably, the possible distributions of the building floor positions are calculated based on the building floor height, and the distribution covering the maximum number of windows is taken as the correct floor positions.
In one embodiment of the present invention, an electronic device for intelligent building floor identification is provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing steps S201 to S204 of the above embodiment when executing the computer program.
There is also provided in one embodiment of the present invention a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, performs the intelligent building floor identification method provided by the above embodiments. The non-transitory computer-readable storage medium includes, for example: ROM/RAM, magnetic disks, optical disks, etc.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. An intelligent building floor identification method, characterized by comprising:
parsing the real-scene three-dimensional data to obtain the patch cluster of a single building, and calculating the minimum bounding rectangle of the single building on the two-dimensional plane;
constructing four images that respectively take the four sides of the minimum bounding rectangle as their widths and the height of the single building as their height, projecting the facade of the single building onto each of the four images, and combining them to generate a complete building facade image, wherein projecting the facade of the single building onto the four images comprises: calculating, for each triangular patch of the single building's real-scene three-dimensional data, the patch's coverage on the projection plane and its distance to that plane; comparing this distance one by one against the distance values stored at the covered pixels, replacing a pixel with the value of the patch texture when the patch is closer, and updating the stored distance accordingly; and looping over all triangular patches to obtain the final facade image;
after manually annotating the building facade images with a data annotation tool, feeding the annotated samples into a feature extraction network, fusing upper- and lower-layer features through an FPN network, attaching a bottom-up feature enhancement structure, generating proposal windows through a region proposal network, and merging the proposal windows of all levels to obtain the window masks;
and extracting vertical columns of windows from the horizontal arrangement of the windows, and deriving the building floor height from the average height difference within those columns so as to determine the building floor positions.
2. The method of claim 1, wherein generating proposal windows through the region proposal network and merging the proposal windows of all levels to obtain the window masks comprises:
generating proposal windows through the region proposal network, fusing the region features of different levels into fixed-size feature maps, and performing classification, box localization and mask regression on those feature maps.
3. The method of claim 1, wherein said extracting the vertical columns of windows according to the horizontal arrangement of the windows and obtaining the building floor height based on the average height difference of the vertical window columns specifically comprises:
recording the number of window-mask pixels in each row and each column of the building facade image, thereby accumulating window pixel counts in the horizontal and vertical directions of the building; taking the columns at which the vertical accumulation reaches local maxima and computing the vertical center line of each window column; determining vertically adjacent window pairs; and taking the average height difference over all window pairs as the floor height.
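The accumulation procedure in claim 3 can be sketched with numpy as follows. This is a simplified illustration, assuming a clean binary window mask: window columns are taken as runs of non-zero column counts (standing in for the local maxima of the claim), and the floor height is the mean vertical gap between window centers stacked in each column.

```python
import numpy as np

def runs(binary):
    """Return (start, end) index pairs of consecutive True runs in a 1-D array."""
    padded = np.diff(np.concatenate(([0], binary.astype(int), [0])))
    starts = np.where(padded == 1)[0]
    ends = np.where(padded == -1)[0]
    return list(zip(starts, ends))

def floor_height_from_mask(mask):
    """Estimate floor height: locate window columns from the per-column
    pixel accumulation, then average the vertical spacing between the
    centers of vertically adjacent windows within each column."""
    col_counts = mask.sum(axis=0)
    diffs = []
    for c0, c1 in runs(col_counts > 0):            # each run = one window column
        center_col = (c0 + c1) // 2                # column of the accumulation peak
        row_profile = mask[:, center_col]
        centers = [(r0 + r1) / 2 for r0, r1 in runs(row_profile)]
        diffs += list(np.diff(centers))            # gaps between stacked windows
    return float(np.mean(diffs)) if diffs else 0.0

# Synthetic facade: two window columns, a window every 10 rows.
mask = np.zeros((30, 20), dtype=bool)
for top in (2, 12, 22):
    mask[top:top + 4, 3:7] = True     # left column of windows
    mask[top:top + 4, 12:16] = True   # right column of windows
print(floor_height_from_mask(mask))  # -> 10.0
```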
4. The method of claim 1, wherein said extracting the vertical columns of windows according to the horizontal arrangement of the windows and obtaining the building floor height based on the average height difference of the vertical window columns further comprises:
calculating candidate distributions of building floor positions based on the building floor height, and taking the distribution containing the largest number of windows as the correct floor positions.
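One way to read claim 4 is as a grid-phase search: with the floor height fixed, slide a periodic floor grid over the facade and keep the phase whose floors capture the most window centers. The phase/tolerance formulation below is an illustrative assumption, not taken from the patent.

```python
import numpy as np

def best_floor_positions(window_centers_y, floor_height, tol=0.5):
    """Slide a floor grid of period `floor_height` over the facade and
    return the grid phase (offset) capturing the most window centers
    within `tol`, together with that count."""
    centers = np.asarray(window_centers_y, dtype=float)
    best_offset, best_count = 0.0, -1
    for offset in np.arange(0.0, floor_height, 1.0):   # candidate grid phases
        # distance of each window center to the nearest line of this grid
        d = np.abs((centers - offset + floor_height / 2) % floor_height
                   - floor_height / 2)
        count = int((d <= tol).sum())
        if count > best_count:
            best_offset, best_count = offset, count
    return best_offset, best_count

# Window centers at 4, 14, 24 (floor height 10); one outlier at 7.
offset, count = best_floor_positions([4.0, 14.0, 24.0, 7.0], 10.0)
print(offset, count)  # -> 4.0 3  (phase 4.0 captures the three aligned windows)
```

The distribution with the maximum window count is then taken as the correct set of floor positions, which makes the estimate robust to a few mis-detected windows.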
5. An apparatus for intelligent identification of building floors, comprising:
the analysis module is used for parsing the real-scene three-dimensional data to obtain the patch cluster of a single building, and calculating the minimum circumscribed rectangle of the single building on the two-dimensional plane;
the projection module is used for taking each of the four sides of the minimum circumscribed rectangle in turn as the image width and the height of the single building as the image height to construct four images, projecting the facade of the single building onto each of the four images, and combining them to generate a complete building facade image; wherein said projecting the facade of the single building onto the four images comprises: for each triangular patch in the real-scene three-dimensional data of the single building, calculating the coverage of the patch in the projection plane and the distance between the patch and the projection plane; comparing that distance, pixel by pixel within the coverage, with the distance value stored at each pixel, and, where the patch is closer, replacing the pixel value with the corresponding texture value of the triangular patch and updating the distance value stored at the pixel; and iterating over all triangular patches to obtain the final facade image;
the generation module is used for inputting building facade images, manually annotated with a data annotation tool, into the feature extraction network, performing cross-layer feature fusion through the FPN, appending a bottom-up feature enhancement structure, generating suggestion windows through the region proposal network, and merging the suggestion windows of all levels to obtain a window mask;
and the calculation module is used for extracting the vertical columns of windows according to the horizontal arrangement of the windows, and obtaining the building floor height from the average height difference between vertically adjacent windows, so as to determine the floor positions of the building.
6. The apparatus of claim 5, wherein generating suggestion windows through the region proposal network and merging the suggestion windows of all levels to obtain a window mask comprises:
generating suggestion windows through the region proposal network, fusing region features from different levels to obtain fixed-size feature maps, and performing classification, bounding-box localization and mask regression on the feature maps.
7. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the intelligent building floor identification method according to any one of claims 1 to 4.
8. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the intelligent building floor identification method according to any one of claims 1 to 4.
CN202011013179.6A 2020-09-23 2020-09-23 Intelligent building floor identification method and device Active CN112183301B (en)


Publications (2)

Publication Number Publication Date
CN112183301A (en) 2021-01-05
CN112183301B (en) 2023-06-16

Family

ID=73956177






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant