CN114066768A - Building facade image restoration method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114066768A
CN114066768A
Authority
CN
China
Prior art keywords
facade
image
window
building
building facade
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111405222.8A
Other languages
Chinese (zh)
Inventor
张银松
赵峻弘
徐徐升
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Dashi Intelligence Technology Co., Ltd.
Original Assignee
Wuhan Dashi Intelligence Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Dashi Intelligence Technology Co., Ltd.
Priority to CN202111405222.8A
Publication of CN114066768A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/23 Pattern recognition; clustering techniques
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06F 18/253 Fusion techniques of extracted features
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/045 Neural networks; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 2207/10004 Still image; photographic image
    • G06T 2207/10012 Stereo images
    • G06T 2207/20221 Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a building facade image restoration method, device, equipment and storage medium, relating to the field of image processing. The method acquires a first building facade image, and a facade window data set within it, from live-action three-dimensional data; locates the facade windows in the first building facade image according to the facade window data set; generates a mask image of the facade windows; and repairs that mask image to obtain a repaired mask image. A repair generation model then repairs the first building facade image based on the repaired mask image to obtain a repaired second building facade image. The whole repair process can run automatically under program control, saving manpower and material resources and reducing the post-processing workload on the three-dimensional model. In addition, because the method operates on the original building facade image, it better preserves the realism of the original imagery.

Description

Building facade image restoration method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of computer images, in particular to a building facade image restoration method, device, equipment and storage medium.
Background
As smart-city construction deepens, city-scale real-scene three-dimensional models are being produced rapidly and have become an important means of acquiring ground three-dimensional information. However, because existing automatic modeling software builds models according to three-dimensional reconstruction principles, some regions cannot be matched and modeled accurately, and reflective surfaces in a scene may fail to match at all, leaving the resulting scene flawed and insufficiently realistic.
These problems can be mitigated by post-hoc software repair, but existing model repair software relies on manual repair or replacement, and the manpower and time costs of large-area, large-scale model repair are enormous.
Disclosure of Invention
The present invention aims to provide a building facade image restoration method, device, equipment and storage medium for overcoming the defects in the prior art, so as to realize the quick restoration of the building facade image.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides a building facade image restoration method, including:
acquiring a first building facade image and a facade window data set in the first building facade image by using live-action three-dimensional data, wherein the facade window data set comprises a label of a facade window in the first building facade image;
positioning a facade window in the first building facade image according to the facade window data set to generate a mask image of the facade window;
repairing the mask image of the facade window to obtain a repaired mask image;
and repairing the first building facade image based on the repaired mask image by adopting a repair generation model to obtain a repaired second building facade image.
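The four steps of the first aspect can be sketched end to end as follows; every function body is a placeholder (the mean-fill "repair", for instance, merely stands in for the repair generation model), so this illustrates the data flow only, not the patented models:

```python
import numpy as np

def extract_facade_and_windows(scene):
    """Step 1 (placeholder): obtain the facade image and window boxes from 3D data."""
    return scene["facade"], scene["windows"]

def generate_window_mask(image, windows):
    """Step 2 (placeholder): rasterise located windows into a binary mask."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for x0, y0, x1, y1 in windows:
        mask[y0:y1, x0:x1] = 1
    return mask

def repair_mask(mask):
    """Step 3 (placeholder): regularise the mask (identity here)."""
    return mask

def repair_facade(image, repaired_mask):
    """Step 4 (placeholder): inpaint masked pixels, here with the image mean."""
    out = image.copy()
    out[repaired_mask == 1] = image.mean(axis=(0, 1)).astype(image.dtype)
    return out

def restore(scene):
    image, windows = extract_facade_and_windows(scene)
    mask = repair_mask(generate_window_mask(image, windows))
    return repair_facade(image, mask)
```

All names are illustrative; each placeholder would be replaced by the extraction network, mask regularisation, and repair generation model described below.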
Optionally, the positioning a facade window in the first building facade image according to the facade window data set to generate a mask image of the facade window includes:
according to the facade window data set, carrying out feature extraction on the facade window in the first building facade image to generate a candidate region of the facade window;
and detecting the frame coordinates of the facade window according to the candidate area of the facade window, and generating a mask image of the facade window.
Optionally, the performing feature extraction on the facade window in the first building facade image according to the facade window data set to generate the candidate area of the facade window further includes:
performing multi-layer feature extraction on the facade window in the first building facade image based on an extraction network according to the facade window data set, to generate a multi-layer first building facade image and multi-layer facade window candidate areas; wherein the extraction network is trained on a first sample dataset comprising: building facade images annotated with window labels;
the detecting the facade window frame coordinates and generating the mask image of the facade window comprises:
performing pooling on the multi-layer first building facade image to adjust its size, obtaining an adjusted multi-layer first building facade image;
detecting window frame coordinates by adopting a frame detection algorithm based on the adjusted multilayer first building facade image and the multilayer facade window candidate area;
and generating a mask image of the facade window according to the window frame coordinates.
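The pooling-and-resize step can be illustrated with a minimal RoI max-pooling routine that maps an arbitrary candidate region to a fixed-size grid; the patent does not specify its exact pooling operator, so this is a generic sketch with illustrative names:

```python
import numpy as np

def roi_pool(feature, box, out_size=2):
    """Max-pool the feature-map region under `box` (x0, y0, x1, y1)
    to a fixed out_size x out_size grid, so candidate regions of
    different sizes all reach the detector at one resolution."""
    x0, y0, x1, y1 = box
    region = feature[y0:y1, x0:x1]
    h, w = region.shape
    # bin boundaries for the pooled grid
    ys = np.linspace(0, h, out_size + 1).round().astype(int)
    xs = np.linspace(0, w, out_size + 1).round().astype(int)
    out = np.empty((out_size, out_size), dtype=float)
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out
```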
Optionally, after performing feature extraction on the facade window in the first building facade image according to the facade window data set to generate the candidate region of the facade window, the method further includes:
and performing redundant suppression on the overlapped facade window candidate region based on a non-maximum suppression algorithm.
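The redundancy suppression above is standard greedy non-maximum suppression over overlapping candidate boxes. A self-contained sketch (the threshold and box format are illustrative, not taken from the patent):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes whose
    IoU with it exceeds the threshold, repeat on the remainder."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the kept box with all remaining boxes
        xx0 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy0 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx1 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy1 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx1 - xx0, 0, None) * np.clip(yy1 - yy0, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return keep
```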
Optionally, the repair generation model is a generative adversarial network, and the generative adversarial network includes: a generator and at least one discriminator; the method further comprises:
mapping, by the generator, a sample mask image to a real building facade image to obtain a composite image, wherein the sample mask image is a mask image containing a standard facade window;
and distinguishing, by the discriminator, the composite image from the real building facade image, so as to train the generative adversarial network.
Optionally, the distinguishing, by the discriminator, the composite image from the real building facade image so as to train the generative adversarial network includes:
respectively performing multi-layer down-sampling on the composite image and the real building facade image to obtain multi-layer composite images of the facade window and multi-layer real building facade images of the facade window;
and distinguishing, by a plurality of discriminators respectively, the multi-layer composite images from the multi-layer real building facade images, so as to train the generative adversarial network.
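The multi-discriminator scheme above resembles the multi-scale discriminators of pix2pixHD-style GANs. Below is a minimal numpy sketch of the data flow only: a 2x average-pooled image pyramid with one discriminator per level. The discriminators here are placeholders; real ones would be convolutional networks, and all names are illustrative:

```python
import numpy as np

def downsample(img):
    """2x average pooling, building the next pyramid level."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def multiscale_scores(img, discriminators):
    """Score the image at each scale: discriminator k sees the
    image down-sampled k times."""
    scores = []
    for d in discriminators:
        scores.append(d(img))
        img = downsample(img)
    return scores
```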
Optionally, the obtaining a first building elevation image and an elevation window data set in the first building elevation image by using the live-action three-dimensional data includes:
acquiring a patch cluster of each building contained in the live-action three-dimensional data according to the live-action three-dimensional data;
identifying and obtaining the outer contour of each building according to the patch cluster of the building;
determining the number of facades of each building according to the outer contour, and extracting a grid surface of each facade of each building according to a preset constraint condition to obtain a facade image of each facade, wherein the facade image comprises the first building facade image;
and processing the first building facade image to obtain a facade window data set in the first building facade image.
In a second aspect, an embodiment of the present application further provides a building facade repair apparatus, including: the device comprises a processing module, a generating module, a mask repairing module and a repairing generating module;
the processing module is used for acquiring a first building facade image and a facade window data set in the first building facade image by adopting live-action three-dimensional data, wherein the facade window data set comprises a label of a facade window in the first building facade image;
the generating module is used for positioning a facade window in the first building facade image according to the facade window data set and generating a mask image of the facade window;
the mask repairing module is used for repairing the mask image of the facade window to obtain a repaired mask image;
and the restoration generating module is used for restoring the first building facade image based on the restored mask image by adopting a restoration generating model to obtain a restored second building facade image.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a storage medium and a bus, wherein the storage medium stores program instructions executable by the processor; when the electronic device runs, the processor communicates with the storage medium through the bus, and the processor executes the program instructions to perform the steps of the building facade image restoration method according to any one of the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium, where the storage medium stores a computer program, and the computer program is executed by a processor to perform the steps of the building facade image restoration method according to any one of the first aspect.
The beneficial effects of this application are as follows: the embodiment of the application provides a building facade image restoration method that acquires a first building facade image, and a facade window data set within it, from live-action three-dimensional data; locates the facade windows in the first building facade image according to the facade window data set; generates a mask image of the facade windows; and repairs that mask image to obtain a repaired mask image. A repair generation model then repairs the first building facade image based on the repaired mask image to obtain a repaired second building facade image. The whole repair process can run automatically under program control, saving manpower and material resources and reducing the post-processing workload on the three-dimensional model. In addition, because the method operates on the original building facade image, it better preserves the realism of the original imagery.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart of a building facade image restoration method according to an embodiment of the present application;
fig. 2 is a flowchart of a building facade image restoration method according to yet another embodiment of the present application;
FIG. 3 is a schematic diagram of the workflow of a conventional facade-window mask image generation method;
fig. 4 is a schematic view illustrating a window area processing workflow in a building facade image restoration method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a feature enhancement structure of a building facade image restoration method in an embodiment of the present application;
fig. 6 is a flowchart of a building facade image restoration method according to another embodiment of the present application;
fig. 7 is a flowchart of a building facade image restoration method according to yet another embodiment of the present application;
fig. 8 is a framework diagram of the generative adversarial network in a building facade image restoration method according to an embodiment of the present application;
fig. 9 is a flowchart of a building facade image restoration method according to yet another embodiment of the present application;
fig. 10 is a flowchart of a building facade image restoration method according to a further embodiment of the present application;
fig. 11 is a comprehensive flowchart of a building facade image restoration method according to an embodiment of the present application;
fig. 12 is a schematic view of a building facade image restoration device according to an embodiment of the present application;
fig. 13 is a schematic view of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention.
In this application, unless explicitly stated or limited otherwise, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, for example, two or three, unless specifically defined otherwise. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
At present, ground three-dimensional information is acquired in various ways to build city three-dimensional models. For example, with the rapid development of unmanned-aerial-vehicle oblique photography, a UAV carrying a multi-lens camera can perform oblique photography, from which a city real-scene three-dimensional model is generated. After the ground three-dimensional information is acquired, automatic modeling software such as Smart3D, PhotoScan or Pix4D is usually used to build the real-scene three-dimensional model. However, the resulting model can be defective and insufficiently realistic for two kinds of reasons: principle factors and subjective factors. The principle factor is that existing photogrammetric modeling software builds models by extracting and matching image feature points, so there are situations in which accurate matching and modeling cannot be achieved under the three-dimensional reconstruction principle.
In particular, in practical applications, when the real scene being modeled contains reflective surfaces (such as glass) or hollowed-out structures, especially in building window areas, then lighting and shadow conditions, the data acquisition equipment, the modeling software and other factors commonly leave the modeled building with ghosting, streaks and light-and-shadow mottling, particularly in geometrically varied areas such as windows. Such problems can be addressed with post-processing software (e.g. 3ds Max, Geomagic, Meshmixer and similar repair modules), and the common current practice is to replace the problematic facade with a better image prepared in advance. However, these model repair tools rely mainly on manual repair or replacement: for large-area, large-scale repair work the manpower and time costs are enormous, and a replaced facade may lack realism. How to substitute intelligent algorithms for this manual labor has therefore become the key to model repair. This application studies the repair of model flaws caused by reflective surfaces, hollowed-out objects and the like, and, starting from the repair of windows, provides a building facade image restoration method addressing problems such as window distortion and deformation in the model.
Aiming at building facade image restoration, the embodiment of the application provides multiple possible implementation modes so as to realize the rapid restoration of the building facade image. The following is explained by way of a number of examples in connection with the drawings. Fig. 1 is a flowchart of a building facade image restoration method according to an embodiment of the present application, where the building facade image restoration method can be implemented by an electronic device running a building facade image restoration program, and the electronic device may be, for example, a terminal device or a server. As shown in fig. 1, the method includes:
step 101: and acquiring a first building facade image and a facade window data set in the first building facade image by adopting the live-action three-dimensional data, wherein the facade window data set comprises a label of a facade window in the first building facade image.
In one possible implementation, the live-action three-dimensional data may be acquired using oblique photography. Oblique photography is a technology developed in the aerial-survey field in recent years: it senses complex scenes comprehensively over a large range with high precision and high definition, and the data products generated by efficient acquisition equipment and professional processing pipelines directly reflect attributes such as the appearance, position and height of ground objects, guaranteeing both realism and surveying accuracy. The live-action three-dimensional data may therefore include one or more of: position data, appearance data, height data, texture data, and the like. The foregoing is merely an example; in an actual implementation the live-action three-dimensional data may also be obtained in other manners, which is not limited in this application.
After the live-action three-dimensional data is acquired, a plurality of buildings may be included in the data, each building includes a plurality of facade images, and a first building facade image is acquired from the plurality of buildings, and the first building facade image may be any facade image of any building in the live-action three-dimensional data. In addition, according to the live-action three-dimensional data, a facade window data set in the first building facade image is obtained, the data set comprises a label of a facade window in the first building facade image, that is, the facade window data set is data representing the facade window in the first building facade image, and the data can be in a data form or an image form, which is not limited in the application. The method for acquiring the facade window data set may be, for example, acquisition by an algorithm such as artificial intelligence or a neural network, but the present application is not limited thereto, and the facade window data set in the first building facade image may be acquired.
Step 102: and positioning a facade window in the first building facade image according to the facade window data set to generate a mask image of the facade window.
It should be noted that a pre-trained facade window extraction algorithm may be used to obtain a facade window data set, and specifically, the method for training the facade window extraction algorithm may include:
and acquiring a building facade image template and a facade window template data set in the building facade image template by adopting the live-action three-dimensional data, wherein the facade window template data set comprises a label of a facade window in the building facade image template. The label of the facade window can be marked by a worker so as to ensure the accuracy.
The facade window extraction algorithm learns from the building facade image template and the facade window template data set; once trained, it can segment and locate the windows in the first building facade image.
And positioning the facade window in the first building facade image according to a pre-trained facade window extraction algorithm to generate a mask image of the facade window. The mask image is an image for the purpose of controlling an image processing area or a processing process by blocking all or part of a selected image, figure, or object in a processed image. In a possible implementation manner, a mask image of a facade window may be obtained by setting a facade window extracted from a first building facade image to 1 and setting a portion of the first building facade image other than the facade window to 0 according to a pre-trained facade window extraction algorithm, and according to this setting manner, the obtained mask image is expressed as: the facade window in the first building facade image is white, and the part outside the facade window in the first building facade image is black. The above is merely an example, and in practical implementation, the specific representation manner or design manner of the mask image is not limited in the present application, and the use requirement of the user can be met.
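The 1 (white) / 0 (black) mask convention described above can be made concrete with a short sketch; the box coordinates and function names are illustrative only:

```python
import numpy as np

def window_mask(shape, boxes):
    """Binary mask: 1 inside each facade-window box, 0 elsewhere.
    Multiplying by 255 would render windows white on black."""
    mask = np.zeros(shape, dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = 1
    return mask

def apply_mask(image, mask):
    """Use the mask to control the processing area: keep only the
    window pixels of the facade image and zero out the rest."""
    return image * mask[..., None]
```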
Step 103: and repairing the mask image of the facade window to obtain the repaired mask image.
In a possible implementation, the mask image of the facade windows may suffer from problems such as irregular window shapes or missing windows. Because of the practical function of facade windows and the aesthetic requirements of a facade, facade windows are highly regular in structure; exploiting this property, the mask image of the facade windows is repaired and completed. For example, for a facade window whose shape is irregular because of lighting effects or window damage, the shape of the corresponding portion of the mask image may be adjusted based on information such as that window's outline and the outlines of neighboring facade windows, so that the adjusted mask pattern is regular and consistent with the dimensions of the overall mask image. As another example, a facade window that was missed during extraction can be completed from the positional structure among the windows. The above are merely examples; in practice there may be other requirements and methods for repairing the mask image, which are not limited in this application. After the mask image of the facade windows is repaired, the repaired mask image is obtained. In this way, even a window that was missed, or one that is severely damaged, can be repaired automatically from the structure among the windows.
In a specific implementation, repairing the mask image of the facade window may mean regularizing it: for conventional rectangular windows, the minimum enclosing rectangle of each window is extracted, and local completion is then performed according to the arrangement rules within the regularized mask image. Specifically, since in reality most floors have a uniform floor height and windows of consistent size, the local completion process may be: roughly estimate the floor height from the windows extracted in each column, compute from the floor height the positions of windows that were not extracted, and, exploiting the fact that windows in a column are generally consistent in size and shape, copy the window masks extracted on the floors above and below to the computed positions, thereby achieving completion.
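The floor-height completion heuristic above can be sketched for a single column of windows. This sketch assumes one window per floor and takes the median gap between detected window centres as the floor height (both assumptions are made here for illustration, not stated in the patent):

```python
import numpy as np

def complete_column(y_centers):
    """Given vertical centres of windows detected in one column,
    estimate the floor height as the median gap and insert centres
    for floors where no window was extracted."""
    ys = np.sort(np.asarray(y_centers, dtype=float))
    gaps = np.diff(ys)
    floor_h = np.median(gaps[gaps > 0])
    out = [ys[0]]
    for y in ys[1:]:
        # number of floors skipped between consecutive detections
        n_missing = int(round((y - out[-1]) / floor_h)) - 1
        for _ in range(n_missing):
            out.append(out[-1] + floor_h)  # copy position down one floor
        out.append(y)
    return out
```

In a full implementation, each inserted centre would receive a copy of the window mask from the floor above or below, as the paragraph describes.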
Step 104: and repairing the first building facade image based on the repaired mask image by adopting a repairing generation model to obtain a repaired second building facade image.
The repair generation model repairs the first building facade image based on the repaired mask image. In a possible implementation, the repair generation model may adjust each facade window according to the changes in shape, position and the like of each facade window between the original mask image and the repaired mask image, thereby repairing the first building facade image and obtaining the repaired second building facade image. The above is merely an example; in practice the repair generation model may perform the repair in other manners, which is not limited in this application.
In summary, the embodiment of the present application provides a building facade image restoration method: a first building facade image and a facade window data set within it are obtained from live-action three-dimensional data; the facade windows in the first building facade image are located according to the facade window data set and a mask image of the facade windows is generated; the mask image is repaired to obtain a repaired mask image; and a repair generation model repairs the first building facade image based on the repaired mask image to obtain a repaired second building facade image. First, the method automatically extracts the facade windows in the first building facade image with a pre-trained facade window extraction algorithm, automatically repairs the extracted mask image, and then repairs the first building facade image with the repair generation model; the whole repair process can run automatically under program control, saving manpower and material resources and reducing the post-processing workload on the three-dimensional model. Second, because the repair is performed on the original building facade image, the realism of the original imagery is preserved.
Optionally, on the basis of fig. 1, the present application further provides a possible implementation of generating the facade window mask image in the building facade image restoration method. Fig. 2 is a flowchart of generating a facade window mask image in a building facade image restoration method according to yet another embodiment of the present application. As shown in fig. 2, locating the facade windows in the first building facade image according to the facade window data set and generating the facade window mask image includes:
Step 201: perform feature extraction on the facade windows in the first building facade image according to the facade window data set, to generate facade window candidate regions.
In one possible implementation, the facade windows in the first building facade image are extracted by feature extraction according to the facade window data set, and the facade window candidate regions are then generated from the regions of the extracted windows. Note that a facade window candidate region may contain at least one complete facade window or only part of a facade window; the extraction result may vary with the accuracy of the facade window extraction algorithm, and the user may adjust this accuracy by further training or parameter tuning, which is not limited in this application.
In a specific implementation, a Region Proposal Network (RPN) may be used to perform feature extraction on the facade windows in the first building facade image, so as to generate the facade window candidate regions.
Step 202: detect the frame coordinates of the facade windows according to the facade window candidate regions, to generate the facade window mask image.
After the facade window candidate regions are generated, the frame coordinates of the facade windows are detected, and the facade window mask image is generated based on the detected frame coordinates.
In a specific implementation, candidate region generation and mask generation for window localization can be realized with Mask R-CNN. Fig. 3 is a schematic diagram of the workflow of a conventional facade window mask image generation method. As shown in fig. 3, a Region Proposal Network (RPN) extracts features of the facade windows in the first building facade image to generate candidate regions; the candidate regions are classified by a Softmax classifier; the bounding-box coordinates are obtained by a multi-task-loss bounding-box regression algorithm; and finally the mask is generated with a Fully Convolutional Network (FCN) segmentation head.
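As an illustrative sketch of the final step above — turning detected frame coordinates into a binary mask image — the following pure-Python function rasterises axis-aligned window boxes into a 0/1 mask (`boxes_to_mask` is a hypothetical name, not part of Mask R-CNN or the claimed method, which uses an FCN segmentation head):

```python
def boxes_to_mask(height, width, boxes):
    """Build a binary facade-window mask from detected box coordinates.

    boxes: list of (x_min, y_min, x_max, y_max) in integer pixel coordinates.
    Returns a height x width grid of 0/1 values (1 = window pixel).
    """
    mask = [[0] * width for _ in range(height)]
    for x0, y0, x1, y1 in boxes:
        # Clamp each box to the image bounds before filling it in.
        for y in range(max(0, y0), min(height, y1)):
            for x in range(max(0, x0), min(width, x1)):
                mask[y][x] = 1
    return mask
```

A real pipeline would instead paint the per-window segmentation output of the network, but the box-filled mask shows the data layout the later repair steps consume.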
In another possible implementation, to further strengthen feature expression and thereby improve recognition accuracy, the present application improves on the basic Mask R-CNN. Fig. 4 is a schematic diagram of the window-region processing workflow in a building facade image restoration method provided by an embodiment of the present application. As shown in fig. 4, on the basis of fig. 3, a bottom-up path augmentation structure is introduced; the features of each layer are then fused, and the fused features are used to locate and detect the bounding-box coordinates. Although the Feature Pyramid Network (FPN) in the original Mask R-CNN already considers multiple levels of information, its localization ability is limited because information from the bottom layers does not propagate well upward. The feature-enhancement branch added in the present application makes full use of this bottom-layer information. Fig. 5 is a schematic view of the feature-enhancement structure of a building facade image restoration method in an embodiment of the present application; as shown in fig. 5, Ni is fused with Pi+1 by element-wise addition to obtain Ni+1.
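The element-wise fusion Ni+1 = downsample(Ni) + Pi+1 can be sketched as follows; this is a minimal stand-in that subsamples with stride 2 where the actual bottom-up path augmentation (as in PANet) uses a 3x3 stride-2 convolution, and `downsample2`/`fuse` are illustrative names:

```python
def downsample2(fm):
    """Stride-2 subsample of a 2-D feature map (stand-in for a 3x3/stride-2 conv)."""
    return [row[::2] for row in fm[::2]]

def fuse(n_i, p_next):
    """N_{i+1} = downsample(N_i) + P_{i+1}, element-wise (bottom-up path style)."""
    d = downsample2(n_i)
    assert len(d) == len(p_next) and len(d[0]) == len(p_next[0])
    return [[a + b for a, b in zip(d_row, p_row)]
            for d_row, p_row in zip(d, p_next)]
```

Because the addition is element-wise, the downsampled Ni must match Pi+1 in spatial size — the assertion makes that requirement explicit.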
In this way, feature extraction is performed on the facade windows in the first building facade image according to the facade window data set to generate the facade window candidate regions, and the frame coordinates of the facade windows are then detected according to those candidate regions to generate the facade window mask image.
Optionally, on the basis of fig. 2, the present application further provides another possible implementation of the building facade image restoration method. Fig. 6 is a flowchart of a building facade image restoration method provided in another embodiment of the present application. As shown in fig. 6, performing feature extraction on the facade windows in the first building facade image according to the facade window data set to generate the facade window candidate regions includes:
Step 601: perform multi-layer feature extraction on the facade windows in the first building facade image based on an extraction network and according to the facade window data set, to generate multi-layer first building facade images and multi-layer facade window candidate regions; the extraction network is trained on a first sample data set comprising building facade images annotated with window labels.
In one possible implementation, the extraction network may be constructed as follows: building facade image templates are taken as samples; the facade windows in the template images are annotated manually and given window labels to obtain the first sample data set; and the network is trained on this data set. The trained network can detect the facade windows in the first building facade image and, using detection boxes, locate the facade windows region by region, generating the facade window candidate regions.
In practice, building facade images vary in size and are often large, so they cannot be fed directly into the facade window extraction network for training. The facade images are first cropped; window labels are then added to the facade windows of the cropped images; and the data are augmented by translation, rotation, scaling, and other transformations to increase the diversity of the training data. Note that during training the hyper-parameters of the algorithm, such as the number of iterations and the learning rate, may be fine-tuned, for example by monitoring the loss function; training can be considered complete when the loss converges or approximately converges. After the extraction network is trained, the accuracy curves on the training and validation sets can be analyzed for over-fitting or under-fitting, and the network can then be invoked to obtain test results.
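The coordinate side of the augmentation step (translating, rotating, and scaling annotated window points along with the image) can be sketched as below; `augment_point` is a hypothetical helper, and a real pipeline would apply the same transform to the image pixels:

```python
import math

def augment_point(x, y, cx, cy, angle_deg=0.0, dx=0.0, dy=0.0, scale=1.0):
    """Rotate (about (cx, cy)), scale, then translate one annotation coordinate.

    Used to keep window-label coordinates consistent with an augmented image.
    """
    theta = math.radians(angle_deg)
    # Rotate the point about the chosen centre.
    rx = (x - cx) * math.cos(theta) - (y - cy) * math.sin(theta)
    ry = (x - cx) * math.sin(theta) + (y - cy) * math.cos(theta)
    # Scale about the centre, then translate.
    return (cx + rx * scale + dx, cy + ry * scale + dy)
```

Applying the helper to all four corners of a window box (and re-taking the min/max) yields the augmented box label.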
In a specific implementation, the extraction network may be trained in advance by deep learning, for example as follows: original building facade image templates are collected; the facade windows in the template images are annotated manually, the window regions are delineated with detection boxes, and window labels are added to obtain the first sample data set; a convolutional neural network is then constructed and trained on the first sample data set. The trained model can detect the facade windows in the first building facade image, locate them region by region with detection boxes, and generate the facade window candidate regions.
The foregoing is merely an example; in practice other extraction network algorithms that can locate facade windows region by region may be used, which is not limited in this application.
After the extraction network is trained, multi-layer feature extraction is performed on the facade windows in the first building facade image; that is, the input first building facade image is sampled at multiple scales to generate multi-layer first building facade images, and feature extraction is performed on the facade image at each layer to obtain the multi-layer facade window candidate regions.
Detecting the frame coordinates of the facade windows and generating the facade window mask image then includes:
step 602: and performing pooling treatment on the multi-layer first building facade image, adjusting the size of the multi-layer first building facade image, and acquiring the adjusted multi-layer first building facade image.
In a possible implementation manner, since the size and the dimension of the input building facade image are uncertain, and the output floor needs to be fixed according to the needs of a user, the first building facade image size of each floor needs to be adjusted by performing pooling processing on the multiple floors of building facade images, so as to obtain the adjusted multiple floors of first building facade images. The fixed size required by the output layer is not related to the size of the building facade image originally input, but is only related to the setting of the user, and the user can set the size of the image output by the output layer according to the actual requirement.
In a specific implementation mode, the ROIAlign pooling can be used in the Mask-RCNN to realize the pooling processing of the multi-layer first building facade image. The above is merely an example, and other pooling modes may be used in practical implementation, which is not limited in the present application.
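For illustration only, the following simplified stand-in pools a region of a 2-D feature map into a fixed-size grid by max-pooling over integer bins; Mask R-CNN's actual ROIAlign instead samples at fractional coordinates with bilinear interpolation to avoid quantisation error, so this sketch only demonstrates the "variable-size region in, fixed-size grid out" contract:

```python
def roi_max_pool(fm, x0, y0, x1, y1, out_h, out_w):
    """Max-pool region [x0:x1) x [y0:y1) of feature map fm into out_h x out_w."""
    h, w = y1 - y0, x1 - x0
    out = []
    for i in range(out_h):
        # Integer bin boundaries along y (the quantisation ROIAlign avoids).
        ys, ye = y0 + i * h // out_h, y0 + (i + 1) * h // out_h
        row = []
        for j in range(out_w):
            xs, xe = x0 + j * w // out_w, x0 + (j + 1) * w // out_w
            row.append(max(fm[y][x]
                           for y in range(ys, max(ys + 1, ye))
                           for x in range(xs, max(xs + 1, xe))))
        out.append(row)
    return out
```

Whatever the input region's size, the output is always `out_h x out_w`, matching the fixed size the downstream layers require.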
Step 603: detect the window frame coordinates with a bounding-box detection algorithm, based on the adjusted multi-layer first building facade images and the multi-layer facade window candidate regions.
In one possible implementation, based on the adjusted multi-layer first building facade images, the window frame coordinates are detected for the facade window candidate region of each layer using a bounding-box detection algorithm, for example the multi-task-loss bounding-box regression algorithm in Mask R-CNN, which is not limited in this application.
Step 604: generate the facade window mask image according to the window frame coordinates.
Once the window frame coordinates are obtained, the facade window mask image can be generated.
By extracting multi-layer features of the facade windows in the first building facade image and performing the subsequent processing on the multi-layer facade images and candidate regions, deep and shallow information are fused; compared with single-layer processing, the facade window information contained in the generated mask image is more complete.
Optionally, on the basis of fig. 2, the present application further provides another possible implementation of the building facade image restoration method. After feature extraction is performed on the facade windows in the first building facade image according to the facade window data set and the facade window candidate regions are generated, the method includes:
performing redundancy suppression on overlapping facade window candidate regions using a non-maximum suppression algorithm.
In one possible implementation, when feature extraction is performed on the facade windows in the first building facade image using the facade window data set, the generated candidate regions may overlap, sometimes heavily. A Non-Maximum Suppression (NMS) algorithm can then suppress the redundant overlapping candidate regions, i.e., select the best candidate region among the overlapping ones.
By applying non-maximum suppression to the overlapping facade window candidate regions, their interference with subsequent processing is eliminated, further improving the robustness of the restoration method.
Optionally, on the basis of fig. 1, the restoration generation model is a generative adversarial network comprising a generator and at least one discriminator. Fig. 7 is a flowchart of a building facade image restoration method according to yet another embodiment of the present application; as shown in fig. 7, the method includes:
Step 701: the generator maps a sample mask image onto a real building facade image to obtain a composite image, where the sample mask image is a mask image containing standard facade windows.
The restoration generation model is a Generative Adversarial Network (GAN) comprising a Generator (G) and at least one Discriminator (D); the generator and discriminator are used for image-to-image translation. The generator maps the sample mask image onto the real building facade image to obtain a composite image. The sample mask image is a mask image containing standard facade windows, produced from the real building facade image; that is, it is the standard facade window mask corresponding to that real facade image. In other words, when the standard facade window mask is composited with the real building facade image, the resulting composite image shows no difference from the target real facade image, or a difference within a range the user allows.
Step 702: discriminate between the composite image and the real building facade image with the discriminator, so as to train and obtain the generative adversarial network.
The discriminator distinguishes the composite image produced by the generator from the real building facade image; these steps are repeated to train the generative adversarial network.
In a specific implementation, the training of the generative adversarial network required by the present application can be realized with Pix2PixHD (high-definition Pix2Pix), a GAN variant. Fig. 8 is a diagram of the adversarial network framework in a building facade image restoration method according to an embodiment of the present application. As shown in fig. 8, the Pix2PixHD generator G may consist of a global generation network G1 and/or a local generation network G2, where the output size of the global network is 1024 x 512 pixels and that of the local network is 2048 x 1024 pixels. The discriminator is designed at multiple scales: a three-level image pyramid is built from the real image and the composite image, and the image at each level is discriminated; the three discriminators are identical but receive inputs of different sizes.
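The three-level pyramid fed to the multi-scale discriminators can be sketched as repeated 2x average downsampling; Pix2PixHD builds this with an average-pooling layer inside the network, so the helpers below (`avg_downsample`, `image_pyramid`) are illustrative stand-ins operating on a single-channel image:

```python
def avg_downsample(img):
    """Halve a 2-D image by averaging each 2x2 block."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2  # ignore odd trailing row/col
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def image_pyramid(img, levels=3):
    """Build one input per discriminator scale: full, 1/2, and 1/4 resolution."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = avg_downsample(img)
        pyramid.append(img)
    return pyramid
```

Both the real image and the composite image would be passed through the same function, so the discriminator at each level always compares images of equal size.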
In another specific implementation, to reduce the amount of computation so that an ordinary machine can meet the video-memory requirements of training, the method improves on the basic Pix2PixHD framework: only the global generator of Pix2PixHD is used as the generator, while the multi-scale discriminator of Pix2PixHD is retained. Note that after the global generator's series of convolution operations, the resulting high-level features lose some low-level information, such as target positions, which makes localization inaccurate.
The generator maps the sample mask image onto the real building facade image to obtain a composite image, and the discriminator distinguishes the composite image from the real building facade image, so that the generative adversarial network is trained and obtained. The trained network then restores the building facade image; this approach is flexible and reduces labor consumption.
Optionally, on the basis of fig. 7, the present application further provides another possible implementation of the building facade image restoration method. Fig. 9 is a flowchart of a building facade image restoration method according to yet another embodiment of the present application. As shown in fig. 9, discriminating between the composite image and the real building facade image with the discriminator, so as to train and obtain the generative adversarial network, includes:
Step 901: down-sample the composite image and the real building facade image at multiple levels, respectively, to obtain multi-level composite images and multi-level real building facade images of the facade windows.
Referring to fig. 8, when the generative adversarial network is trained, the composite image and the real building facade image are each down-sampled at multiple levels. Note that the down-sampling rate of the composite image must equal that of the real building facade image, so that at each level the composite image and the corresponding real facade image have the same size.
Step 902: discriminate the multi-level composite images and the multi-level real building facade images with a plurality of discriminators, respectively, so as to train and obtain the generative adversarial network.
Training the generative adversarial network with composite and real facade images of different sizes lets the network synthesize images better, and therefore restore images better.
Optionally, on the basis of fig. 1, the present application further provides another possible implementation of the building facade image restoration method. Fig. 10 is a flowchart of a building facade image restoration method provided in yet another embodiment of the present application. As shown in fig. 10, acquiring the first building facade image and the facade window data set in the first building facade image from the live-action three-dimensional data includes:
Step 1001: acquire the patch cluster of each building contained in the live-action three-dimensional data.
In one possible implementation, the live-action three-dimensional data consist of a three-dimensional patch (mesh) model and texture images. The patch cluster of each of the buildings contained in the data is acquired: the building patches in the live-action three-dimensional data are clustered, and the cluster generated for each building, i.e., the set of that building's patch data, is its patch cluster.
Step 1002: identify the outer contour of each building from its patch cluster.
In one possible implementation, after the patch cluster of a building is obtained, the two-dimensional outline of the building is obtained by computing the projection boundary of the cluster on the X-Y plane; the obtained boundary is then regularized to yield the outer contour of the building.
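As an illustrative stand-in for computing the projected two-dimensional outline (the regularization step is omitted, and a real footprint may be non-convex, so this is a deliberate simplification), the boundary of the patch cluster's vertices projected onto the X-Y plane can be taken as their convex hull via Andrew's monotone chain:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))  # lexicographic sort; duplicates removed
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints shared, so drop one copy of each
```

The input would be the (x, y) pairs of the cluster's mesh vertices; the hull's edges then play the role of the contour line segments used in Step 1003.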
Step 1003: determine the number of facades of each building from its outer contour, and extract the mesh faces of each facade according to preset constraint conditions to obtain a facade image of each facade, the facade images including the first building facade image.
In one possible implementation, once the outer contour of each building is obtained, the approximate positions and the number of the building's facades can be determined quickly from the contour, which avoids the dimensional ambiguity that arises when planes are extracted directly from the facade by seed growing.
Then, for each line segment of the outer contour, the mesh faces of each facade of the building are extracted according to the following constraints:
1) the distance from the plane of the mesh face to the line segment does not exceed a threshold T1;
2) the angle (in radians) between the normal vector of the mesh face and the XOY plane is smaller than a threshold T2;
3) the angle between the normal vector of the mesh face and the direction vector of the line segment is smaller than a threshold T3.
Note that T1, T2, and T3 may be set according to user needs or experimental results, which is not limited in this application; moreover, the above is only one possible example of the preset constraints, and other constraints may be used in practice, which is likewise not limited in this application.
Mesh faces satisfying the preset constraints are extracted as the facade corresponding to that segment. These steps are repeated until every facade has been extracted, yielding a facade image of each facade. Note that the method obtains the facade images of every facade of all buildings in the live-action three-dimensional data, including the first building facade image.
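The three constraints can be sketched as a per-face predicate. This is an assumption-laden sketch: the distance in constraint 1) is simplified to the distance from the face plane to one sample point on the segment, constraint 3) is implemented literally as stated, and all names and threshold values are illustrative:

```python
import math

def angle_between(u, v):
    """Angle in radians between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def satisfies_constraints(centroid, normal, seg_point, seg_dir, t1, t2, t3):
    """Check constraints 1)-3) for one mesh face against one contour segment."""
    # 1) distance from the face plane to a point on the segment <= T1
    n_len = math.sqrt(sum(a * a for a in normal))
    dist = abs(sum(n * (p - c)
                   for n, p, c in zip(normal, seg_point, centroid))) / n_len
    # 2) angle between the normal and the XOY plane < T2
    #    (angle to the plane = pi/2 minus the angle to the plane's normal, z)
    ang_xoy = abs(math.pi / 2 - angle_between(normal, (0.0, 0.0, 1.0)))
    # 3) angle between the normal and the segment direction vector < T3
    ang_seg = angle_between(normal, seg_dir)
    return dist <= t1 and ang_xoy < t2 and ang_seg < t3
```

Faces passing the predicate for a given contour segment would be collected as that segment's facade, as described above.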
Step 1004: process the first building facade image to obtain the facade window data set in the first building facade image.
In one possible implementation, the first building facade image among the obtained facade images of all buildings is processed to obtain the facade window data set of that image.
In summary, fig. 11 is a comprehensive flowchart of the building facade image restoration method according to an embodiment of the present application; as shown in fig. 11, it gives one possible technical implementation flow of the method of the present application, whose specific implementation and technical effects are described above and are not repeated here.
The following describes a building facade image restoration apparatus, an electronic device, a computer-readable storage medium, and the like for implementing the method provided by the present application; their specific implementation and technical effects are described above and are not repeated below.
The embodiment of the application provides a possible implementation example of a building facade image restoration device, and the building facade image restoration method provided by the embodiment can be executed. Fig. 12 is a schematic view of a building facade image restoration device according to an embodiment of the present application. As shown in fig. 12, the building facade image restoration apparatus 100 includes: a processing module 121, a generating module 123, a mask repairing module 125, and a repairing generating module 127;
the processing module 121 is configured to acquire a first building facade image and a facade window data set in the first building facade image by using live-action three-dimensional data, where the facade window data set includes a label of a facade window in the first building facade image;
the generating module 123 is configured to position a facade window in the first building facade image according to the facade window data set, and generate a mask image of the facade window;
the mask repairing module 125 is configured to repair a mask image of the facade window to obtain a repaired mask image;
the restoration generation module 127 is configured to repair the first building facade image based on the repaired mask image using a restoration generation model, to obtain the repaired second building facade image.
Optionally, the generating module 123 is specifically configured to perform feature extraction on the facade windows in the first building facade image according to the facade window data set, to generate facade window candidate regions; and to detect the frame coordinates of the facade windows according to the candidate regions, to generate the facade window mask image.
Optionally, the generating module 123 is specifically configured to perform multi-layer feature extraction on the facade windows in the first building facade image based on an extraction network and according to the facade window data set, to generate multi-layer first building facade images and multi-layer facade window candidate regions, where the extraction network is trained on a first sample data set comprising building facade images annotated with window labels; to pool the multi-layer first building facade images, adjusting their size, to obtain the adjusted multi-layer first building facade images; to detect the window frame coordinates with a bounding-box detection algorithm based on the adjusted multi-layer facade images and the multi-layer candidate regions; and to generate the facade window mask image according to the window frame coordinates.
Optionally, the generating module 123 is specifically configured to perform redundancy suppression on overlapping facade window candidate regions using a non-maximum suppression algorithm.
Optionally, the restoration generating module 127 is specifically configured to map, by the generator, the sample mask image onto the real building facade image to obtain a composite image, where the sample mask image is a mask image containing standard facade windows; and to discriminate between the composite image and the real building facade image with the discriminator, so as to train and obtain the generative adversarial network.
Optionally, the restoration generating module 127 is specifically configured to down-sample the composite image and the real building facade image at multiple levels, respectively, to obtain multi-level composite images and multi-level real building facade images of the facade windows; and to discriminate the multi-level composite images and the multi-level real facade images with a plurality of discriminators, respectively, so as to train and obtain the generative adversarial network.
Optionally, the processing module 121 is specifically configured to acquire, from the live-action three-dimensional data, the patch cluster of each building contained therein; to identify the outer contour of each building from its patch cluster; to determine the number of facades of each building from the outer contour and extract the mesh faces of each facade according to preset constraint conditions, obtaining a facade image of each facade, the facade images including the first building facade image; and to process the first building facade image to obtain the facade window data set in the first building facade image.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. As another example, when one of the above modules is implemented by a processing element scheduling program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As yet another example, these modules may be integrated together and implemented as a System-on-a-Chip (SoC).
The embodiment of the application provides a possible implementation example of an electronic device, which can execute the building facade image restoration method provided by the embodiment. Fig. 13 is a schematic diagram of an electronic device according to an embodiment of the present application, where the electronic device may be integrated in a terminal device or a chip of the terminal device, and the terminal may be a computing device with a data processing function.
The electronic device includes: a processor 1301, a storage medium 1302, and a bus, the storage medium storing program instructions executable by the processor. When the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the program instructions to perform the steps of the building facade image restoration method. The specific implementation and technical effects are similar and are not repeated here.
An embodiment of the present application provides a possible implementation example of a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the building facade image restoration method provided in the foregoing embodiments.
The computer program stored in the storage medium may include instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to perform some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage media include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
An integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to perform some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed by the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A building facade image restoration method is characterized by comprising the following steps:
acquiring a first building facade image and a facade window data set in the first building facade image by using live-action three-dimensional data, wherein the facade window data set comprises a label of a facade window in the first building facade image;
positioning a facade window in the first building facade image according to the facade window data set to generate a mask image of the facade window;
repairing the mask image of the facade window to obtain a repaired mask image;
and repairing the first building facade image based on the repaired mask image by adopting a repair generation model to obtain a repaired second building facade image.
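The four claimed steps can be illustrated with a minimal numpy sketch. This is not the patent's implementation: `window_mask` and `repair` are hypothetical helper names, and a simple mean-fill stands in for the learned repair generation model of step four.

```python
import numpy as np

def window_mask(shape, boxes):
    """Step 2 of claim 1: rasterize located window boxes (x1, y1, x2, y2)
    into a binary mask image of the facade windows."""
    mask = np.zeros(shape, dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = True
    return mask

def repair(img, mask):
    """Step 4 of claim 1, drastically simplified: fill the masked window
    pixels from the unmasked facade texture (here, its mean value) as a
    stand-in for the repair generation model."""
    out = img.astype(float).copy()
    out[mask] = img[~mask].mean()
    return out
```

A real pipeline would replace `repair` with inference through the trained generator described in claims 5 and 6.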
2. The method of claim 1, wherein locating a facade window in the first building facade image according to the facade window data set and generating a mask image of the facade window comprises:
according to the facade window data set, carrying out feature extraction on the facade window in the first building facade image to generate a candidate region of the facade window;
and detecting the frame coordinates of the facade window according to the candidate area of the facade window, and generating a mask image of the facade window.
3. The method of claim 2, wherein performing feature extraction on the facade window in the first building facade image according to the facade window data set to generate the facade window candidate region further comprises:
according to the facade window data set, performing multi-layer feature extraction on the facade window in the first building facade image based on an extraction network, to generate a multi-layer first building facade image and multi-layer facade window candidate regions; wherein the extraction network is trained on a first sample dataset comprising: building facade images annotated with window labels;
and wherein detecting the facade window frame coordinates and generating the mask image of the facade window comprises:
performing pooling processing on the multi-layer first building facade image to adjust its size, obtaining an adjusted multi-layer first building facade image;
detecting window frame coordinates by using a frame detection algorithm based on the adjusted multi-layer first building facade image and the multi-layer facade window candidate regions;
and generating a mask image of the facade window according to the window frame coordinates.
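The pooling processing used above to resize the multi-layer facade images can be illustrated with a block-averaging sketch; the helper name `avg_pool` is hypothetical and the patent does not specify the pooling operator.

```python
import numpy as np

def avg_pool(img, k):
    """Shrink a 2-D facade image or feature map by averaging
    non-overlapping k x k blocks (k must divide both dimensions here)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

Applying `avg_pool` with different `k` to each layer yields same-sized inputs for the subsequent frame detection step.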
4. The method of claim 2, wherein after performing feature extraction on the facade window in the first building facade image according to the facade window data set to generate the facade window candidate region, the method further comprises:
suppressing redundant overlapping facade window candidate regions based on a non-maximum suppression algorithm.
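The non-maximum suppression of claim 4 is a standard detection post-processing step; a self-contained numpy sketch follows (the helper name `nms` and the IoU threshold are illustrative, not taken from the patent).

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring window candidate, then drop any remaining
    candidate whose IoU with it exceeds iou_thresh; repeat until done."""
    boxes = np.asarray(boxes, dtype=float)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = np.argsort(scores)[::-1]  # candidates sorted by score, best first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of the kept box with every remaining candidate
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (area[i] + area[rest] - inter)
        order = rest[iou <= iou_thresh]  # suppress heavy overlaps
    return keep
```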
5. The method of claim 1, wherein the repair generation model is a generative adversarial network, the generative adversarial network comprising: a generator and at least one discriminator; the method further comprises:
mapping, by the generator, a sample mask image to a real building facade image to obtain a composite image, wherein the sample mask image is a mask image containing a standard facade window;
and discriminating, by the discriminator, between the composite image and the real building facade image, so as to train and obtain the generative adversarial network.
6. The method of claim 5, wherein discriminating, by the discriminator, between the composite image and the real building facade image so as to train and obtain the generative adversarial network comprises:
performing multi-layer down-sampling on the composite image and the real building facade image respectively, to obtain a multi-layer composite image and a multi-layer real building facade image of the facade window;
and discriminating, by a plurality of discriminators, between the multi-layer composite image and the multi-layer real building facade image respectively, so as to train and obtain the generative adversarial network.
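The multi-layer down-sampling of claim 6 can be sketched as a repeated 2x average-pooling pyramid; the factor of 2 and the helper name are assumptions, since the patent does not fix the down-sampling operator.

```python
import numpy as np

def downsample_pyramid(img, levels=3):
    """Build the multi-layer inputs for the multiple discriminators by
    repeated 2x average-pooling of a (composite or real) facade image."""
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape[:2]
        cur = pyramid[-1][: h - h % 2, : w - w % 2]  # trim odd edges
        pyramid.append(
            cur.reshape(h // 2, 2, w // 2, 2, *cur.shape[2:]).mean(axis=(1, 3))
        )
    return pyramid
```

Each discriminator then receives one level of the pyramid, so coarse levels judge global facade structure while fine levels judge window detail.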
7. The method of claim 1, wherein acquiring a first building facade image and a facade window data set in the first building facade image by using live-action three-dimensional data comprises:
acquiring a patch cluster of each building contained in the live-action three-dimensional data;
identifying the outer contour of each building according to the patch cluster of the building;
determining the number of facades of each building according to the outer contour, and extracting a grid surface of each facade of each building according to a preset constraint condition to obtain a facade image of each facade, wherein the facade image comprises the first building facade image;
and processing the first building facade image to obtain a facade window data set in the first building facade image.
8. A building facade image restoration apparatus, characterized by comprising: a processing module, a generating module, a mask repair module and a repair generation module;
the processing module is configured to acquire, by using live-action three-dimensional data, a first building facade image and a facade window data set in the first building facade image, wherein the facade window data set comprises a label of a facade window in the first building facade image;
the generating module is configured to locate a facade window in the first building facade image according to the facade window data set and generate a mask image of the facade window;
the mask repair module is configured to repair the mask image of the facade window to obtain a repaired mask image;
and the repair generation module is configured to repair the first building facade image based on the repaired mask image by using a repair generation model, to obtain a repaired second building facade image.
9. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing program instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is running, the processor executing the program instructions to perform the steps of the building facade image restoration method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, performs the steps of the building facade image restoration method according to any one of claims 1 to 7.
CN202111405222.8A 2021-11-24 2021-11-24 Building facade image restoration method, device, equipment and storage medium Pending CN114066768A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111405222.8A CN114066768A (en) 2021-11-24 2021-11-24 Building facade image restoration method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111405222.8A CN114066768A (en) 2021-11-24 2021-11-24 Building facade image restoration method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114066768A true CN114066768A (en) 2022-02-18

Family

ID=80275895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111405222.8A Pending CN114066768A (en) 2021-11-24 2021-11-24 Building facade image restoration method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114066768A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693880A (en) * 2022-03-31 2022-07-01 机械工业勘察设计研究院有限公司 Building mesh model facade finishing method
CN114693880B (en) * 2022-03-31 2023-11-24 机械工业勘察设计研究院有限公司 Building mesh model elevation trimming method
CN115564661A (en) * 2022-07-18 2023-01-03 武汉大势智慧科技有限公司 Automatic restoration method and system for building glass area vertical face
CN115564661B (en) * 2022-07-18 2023-10-10 武汉大势智慧科技有限公司 Automatic repairing method and system for building glass area elevation
CN116091365A (en) * 2023-04-07 2023-05-09 深圳开鸿数字产业发展有限公司 Triangular surface-based three-dimensional model notch repairing method, triangular surface-based three-dimensional model notch repairing device, triangular surface-based three-dimensional model notch repairing equipment and medium
CN116342783A (en) * 2023-05-25 2023-06-27 吉奥时空信息技术股份有限公司 Live-action three-dimensional model data rendering optimization method and system
CN116342783B (en) * 2023-05-25 2023-08-08 吉奥时空信息技术股份有限公司 Live-action three-dimensional model data rendering optimization method and system
CN116958381A (en) * 2023-06-20 2023-10-27 西南交通大学 Automatic generation method for building facade replacement texture
CN117195338A (en) * 2023-09-11 2023-12-08 广州水纹厨房工程设计有限公司 Automatic generation method and device of product design diagram, electronic equipment and storage medium
CN117195338B (en) * 2023-09-11 2024-05-28 广州水纹厨房工程设计有限公司 Automatic generation method and device of product design diagram, electronic equipment and storage medium
CN117036636A (en) * 2023-10-10 2023-11-10 吉奥时空信息技术股份有限公司 Texture reconstruction method for three-dimensional model of live-action building based on texture replacement
CN117036636B (en) * 2023-10-10 2024-01-23 吉奥时空信息技术股份有限公司 Texture reconstruction method for three-dimensional model of live-action building based on texture replacement

Similar Documents

Publication Publication Date Title
CN114066768A (en) Building facade image restoration method, device, equipment and storage medium
US20210312710A1 (en) Systems and methods for processing 2d/3d data for structures of interest in a scene and wireframes generated therefrom
Chen et al. A methodology for automated segmentation and reconstruction of urban 3-D buildings from ALS point clouds
US11816907B2 (en) Systems and methods for extracting information about objects from scene information
Zhou et al. Seamless fusion of LiDAR and aerial imagery for building extraction
Bulatov et al. Context-based automatic reconstruction and texturing of 3D urban terrain for quick-response tasks
CN110084304B (en) Target detection method based on synthetic data set
CN111527467A (en) Method and apparatus for automatically defining computer-aided design files using machine learning, image analysis, and/or computer vision
CN110135455A (en) Image matching method, device and computer readable storage medium
CN108648194B (en) Three-dimensional target identification segmentation and pose measurement method and device based on CAD model
CN113359782B (en) Unmanned aerial vehicle autonomous addressing landing method integrating LIDAR point cloud and image data
CN103839286B (en) The true orthophoto of a kind of Object Semanteme constraint optimizes the method for sampling
CN113192200B (en) Method for constructing urban real scene three-dimensional model based on space-three parallel computing algorithm
CN112560675A (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN111652241A (en) Building contour extraction method fusing image features and dense matching point cloud features
Frommholz et al. Extracting semantically annotated 3D building models with textures from oblique aerial imagery
Tutzauer et al. Façade reconstruction using geometric and radiometric point cloud information
CN114549956A (en) Deep learning assisted inclined model building facade target recognition method
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
Zou et al. Automatic segmentation, inpainting, and classification of defective patterns on ancient architecture using multiple deep learning algorithms
CN114273826A (en) Automatic identification method for welding position of large-sized workpiece to be welded
Xu et al. Deep learning guided building reconstruction from satellite imagery-derived point clouds
Tripodi et al. Brightearth: Pipeline for on-the-fly 3D reconstruction of urban and rural scenes from one satellite image
CN114758087B (en) Method and device for constructing urban information model
Lu et al. Image-based 3D reconstruction for Multi-Scale civil and infrastructure Projects: A review from 2012 to 2022 with new perspective from deep learning methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination