CN113850254A - Building vector outline simplifying method, model and model establishing method based on deep learning - Google Patents

Building vector outline simplifying method, model and model establishing method based on deep learning Download PDF

Info

Publication number
CN113850254A
Authority
CN
China
Prior art keywords
building
data
outline
simplification
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110980000.2A
Other languages
Chinese (zh)
Inventor
江宝得
巫勇
许少芬
陈占龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN202110980000.2A priority Critical patent/CN113850254A/en
Publication of CN113850254A publication Critical patent/CN113850254A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/56Information retrieval; Database structures therefor; File system structures therefor of still image data having vectorial format
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a building vector outline simplification method, a model, and a model establishing method based on deep learning. The simplification process mainly comprises three steps: 1) extracting convolutional features from the rasterized buildings with a MobileNetV2 network; 2) generating a bounding rectangle for each building, cropping the feature map at the corresponding position, and unifying the feature-map size with ROI Pooling; 3) taking the extracted feature map as input and computing the simplified building coordinates with a regression network. The invention realizes the simplification process directly from building outline vectors to vectors, avoiding the information loss and repeated simplification caused by converting a rasterized simplification result back to vector form. By modifying the loss function, the shape characteristics of the simplification result are enhanced and intelligent, automated building generalization is achieved; the model generalizes well, is highly robust, and can be reused in vector building outline simplification scenarios at various scales.

Description

Building vector outline simplifying method, model and model establishing method based on deep learning
Technical Field
The invention belongs to the field of building vector outline simplification, and particularly relates to a building vector outline simplification method, a simplification model, and a model establishing method based on deep learning.
Background
Planar buildings are the most important component of large-scale city maps, and the residential areas formed by building polygons are one of the six major elements of a general map; automating building generalization has therefore long been a focus of research. Compared with natural elements such as rivers and vegetation, buildings have more specific geometric characteristics: their outlines consist of many orthogonal line segments, the vertices are relatively few and evenly distributed, and the boundaries contain local concave-convex structures. Affected by GPS accuracy, remote-sensing image resolution, or human factors, building vector data often contains errors of varying degrees. Building polygons extracted with computer-vision techniques such as image segmentation and deep learning tend to have poor boundary regularity, redundant vertices, and weak right-angle characteristics, and therefore cannot provide a normalized representation of the geographic information.
Researchers have proposed many effective approaches to building simplification. Traditional methods include least squares, half-plane partition generalization, rectangle differencing and its improved variants, progressive graph simplification, template matching, the adjacent-four-point rule and its improved variants, and concave-structure recognition and simplification based on a constrained Delaunay triangulation, among others. However, half-plane partition generalization cannot preserve curved segments of a building, rectangle differencing neglects area preservation to some extent, template matching depends strongly on the template library, and the adjacent-four-point rule cannot take global features into account. These algorithms therefore cannot be applied broadly to building simplification in diverse scenarios, and automated, intelligent building simplification remains difficult to achieve.
Machine learning has been used in recent years to raise the degree of automation in map generalization; researchers use it to coordinate various kinds of geometric-transformation knowledge automatically and thereby overcome shortcomings of the traditional methods. Machine learning offers a data-driven generalization paradigm that does not require simplification constraints to be specified in advance: it can learn the fuzzy relationships among feature indicators hidden in existing simplification cases produced by cartographers in a given generalization setting, for example via vector-raster combined mathematical morphology or back-propagation neural networks. These methods have effectively promoted the intelligentization of building simplification, but their generalization ability is limited and their performance depends on manually designed features and rules, so their degree of automation, knowledge representation, and intelligence still needs to be improved.
As machine learning has developed into deep learning, learning performance has improved markedly, and deep-learning-based methods have been applied successfully to raster image processing, representing a breakthrough in raster data generalization. Deep-learning-based methods are considered to have great potential for improving the efficiency and automation of map generalization. Some problems remain: 1) current deep-learning approaches to building simplification mostly adopt semantic segmentation or generative adversarial networks, training the model with binarized building raster images and simplified building labels, so the simplification result remains a binarized image; 2) the edges of the binarized result show irregularities such as missing pixels and cannot satisfy the shape-characteristic (parallel/perpendicular/collinear) enhancement principle of buildings; 3) boundary points of the binarized result are extracted with methods such as boundary tracing, yielding points with poor regularity, high redundancy, and a low degree of generalization of the building, which still need to be simplified with traditional methods.
Disclosure of Invention
The invention aims to provide a building vector outline simplification method, a model and a model building method based on deep learning.
The technical scheme for solving the technical problems is as follows:
the deep-learning-based building vector outline simplification model establishing method comprises the following steps:
step 1, extract one piece of pre-simplification building outline data from the existing pre-simplification building vector outline data set and select the corresponding simplified building outline data from the simplified building vector outline data set; generate raster data and bounding-rectangle data from the pre-simplification building outline data, and construct label data from the corresponding simplified building outline data;
repeat the above until all pre-simplification building vector outline data in the data set have been traversed; the raster data corresponding to all pre-simplification building outlines together form a rasterized large image of the pre-simplification building vector outline data set, the bounding-rectangle data of all pre-simplification buildings form a bounding-rectangle data set, and all label data form a label data set;
step 2, using a cropping frame of a preset pixel size as a sliding window, crop the rasterized large image of the pre-simplification building vector outline data set, the bounding-rectangle data set and the label data set; retain the buildings completely contained in the frame together with their bounding-rectangle data and label data, and remove incomplete buildings, obtaining cropped building raster data, corresponding bounding-rectangle data and building outline label data; store each cropped raster image, its building bounding-rectangle data and its cropped building outline label data as one training sample in a building training set;
according to the number of coordinate points contained in each building in the cropped building outline label data, divide the buildings in the label data into small-scale, medium-scale and large-scale buildings, and store them in a small-scale, a medium-scale and a large-scale building regression training set, respectively;
step 3, after the entire rasterized large image of the pre-simplification building vector outline data set, the pre-simplification building bounding-rectangle data set and the label data set have been cropped with the cropping frame, obtain a building raster training set, a small-scale building regression training set, a medium-scale building regression training set and a large-scale building regression training set;
step 4, extract a training sample from the building raster training set and perform building-element feature extraction on the raster data in the sample to obtain feature-map data of the raster building elements; crop the convolutional feature map of each individual building from this feature-map data using the corresponding bounding-rectangle data in the training sample;
train regression models with the small-scale, medium-scale and large-scale building training sets respectively, obtaining three regression models corresponding to buildings of different scales;
step 5, perform an ROI Pooling operation on the convolutional feature maps obtained in step 4 to obtain feature maps of uniform size;
step 6, pass the uniformly sized feature maps obtained in step 5 through the three regression models of step 4 corresponding to buildings of different scales, obtaining three regression branches for each feature map; score the three branches with the IoU index and take the branch with the highest score as the simplification result of the building corresponding to that feature map;
step 7, judge whether the current iteration count exceeds the maximum iteration count; if so, go to step 8; if not, compute the loss between the simplification result generated in step 6 and the simplified building outline label data of the same building in the training sample using the loss function and judge whether the loss function has converged; if it has converged, go to step 8; otherwise update the weight matrices and bias vectors with the Adam optimizer, increase the iteration count by 1 and return to step 4;
step 8, save the weight matrix and bias vector parameters of the last iteration to obtain the deep-learning-based building outline simplification model.
Further, the raster data in step 1 are generated from the pre-simplification building vector outline data as follows:
rasterize the building elements and record, for every coordinate point of a building element, its rasterized row-column value and the corresponding original coordinate value; during rasterization the image resolution is computed with the Li-Openshaw minimum-resolution formula Fc = D/S, where Fc is the pixel size of the raster image, S is the element scale and D is the smallest visible object.
Further, the cropping in step 2 is performed as follows: the cropping frame of the preset pixel size is used as a sliding window that moves over the data to be cropped from top to bottom and from left to right with an overlap ratio of 50%; sliding stops once the window has covered the entire rasterized large image of the pre-simplification building vector outline data set; the complete buildings inside the sliding cropping frame are retained and incomplete buildings are discarded; as the window slides, adjacent left-right or top-bottom frames overlap by 50%, and the buildings in the overlapping part serve as data of both frames.
Further, in step 2, according to the number of coordinate points contained in each building in the cropped building outline label data, a building whose outline has 5 points is regarded as a small-scale building, a building whose outline has 6-10 points is regarded as a medium-scale building and its outline is processed to 10 points, and a building whose outline has more than 10 points is regarded as a large-scale building and its outline is processed to 15 points.
Further, in step 4, MobileNetv2 is used to extract building-element features from the cropped raster data.
Further, the loss function in step 7 is a linear combination of the mean square error of the coordinate-point distances and the area difference.
The deep-learning-based building vector outline simplification model is established by the above method.
The deep-learning-based building set vector outline simplification method performs simplification with the deep-learning-based building outline simplification model and the three regression models corresponding to buildings of different scales, and specifically comprises the following steps:
step 1, take out the data to be simplified one by one from the building set vector outline data set to be simplified, and generate raster data and bounding-rectangle data from each building outline to be simplified, until all building vector outline data to be simplified in the data set have been traversed; the raster data corresponding to all individual building outlines to be simplified together form a rasterized large image of the building vector outline data set to be simplified, and the bounding-rectangle data of all building outlines to be simplified form a bounding-rectangle data set;
step 2, using a cropping frame of a preset pixel size as a sliding window, crop the rasterized large image of the building vector outline data set to be simplified and the bounding-rectangle data set; retain the buildings completely contained in the frame together with their bounding-rectangle data and remove incomplete buildings, obtaining cropped raster data of the buildings to be simplified and the corresponding building bounding-rectangle data; store each cropped raster image and its building bounding-rectangle data as one test sample in a building test set;
step 3, extract the test samples one by one from the test set and perform building-element feature extraction on the cropped raster data in each sample with the deep-learning-based building outline simplification model, obtaining feature-map data of the raster building elements; crop the convolutional feature map of each individual building from this feature-map data using the corresponding bounding-rectangle data in the test sample;
step 4, perform an ROI Pooling operation on the convolutional feature maps obtained in step 3 to obtain feature maps of uniform size;
step 5, pass the uniformly sized feature maps obtained in step 4 through the three trained regression models, obtaining three regression branches for each feature map; score the three branches with the IoU index and take the branch with the highest score as the simplification result of the building corresponding to that feature map.
The invention has the following beneficial effects: the model is trained with the feature map of a single pre-simplification building and the vertex coordinates of the simplified building as training data, so that it learns automatically, from cases, the various relationships implicit in building simplification; simplification therefore proceeds directly from building outline vectors to vectors, avoiding the information loss and repeated simplification caused by converting a rasterized simplification result back to vector form. By modifying the loss function, the shape characteristics of the simplification result are enhanced, intelligent and automated building generalization is achieved, the model generalizes well and is highly robust, and it can be reused in vector building outline simplification scenarios at various scales.
Description of the drawings
FIG. 1 is a schematic flow chart of a model building method of the present invention;
FIG. 2 is a schematic flow chart of a model training method of the present invention;
FIG. 3 is a schematic diagram of the pre-simplification building set outline data with their bounding rectangles;
FIG. 4 is a schematic diagram of the simplified building set outline data with their stored vertices;
FIG. 5 is a schematic diagram of the calculation of an area difference loss function;
FIG. 6 shows building outlines simplified with the method of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in FIGS. 1-2, the invention obtains building vector outline data from a publicly available data set as training samples and generates the simplified building vector outlines with a mature building simplification tool such as ArcGIS. Specifically, the WHU Building Dataset is used; the roughly 22000 manually edited outlines of individual buildings in Christchurch, New Zealand contained in that data set serve as training sample data, and their outlines simplified with the ArcGIS building simplification tool serve as data labels.
1. Data pre-processing
The data preprocessing consists of constructing label data from the simplified vector building outlines in the training sample data set and, as shown in FIGS. 3-4, generating bounding-rectangle data and raster data from the pre-simplification vector building outlines. The construction methods are given below:
1) Construction of the label data: because a convolutional neural network requires fixed input and output sizes while the number of polygon vertices after simplification varies, the buildings are divided into three classes according to the number of label points: a simplified outline with 5 points is classed as a small-scale building; one with 6-10 points is classed as a medium-scale building and its outline is processed to 10 points; one with more than 10 points is classed as a large-scale building and its outline is processed to 15 points. Three regression branches can then be trained separately.
2) Generation of raster data: since the deep-learning model for target detection takes images as input, the building vector outlines must be converted into binarized raster images. The image resolution used for rasterization is determined with the Li-Openshaw minimum-resolution formula Fc = D/S, where Fc is the pixel size of the raster image, S is the element scale and D is the smallest visible object (SVO), generally taken as 0.2 mm. Since the scale of the data set used in this experiment is 1:100000, the image resolution is Fc = D/S = 0.2 mm / (1/100000) = 20 m.
3) Data segmentation: the input size of the convolutional neural network is set to 256 × 256; the binarized raster image of the buildings, the corresponding building bounding rectangles and the corresponding simplified vector coordinates are cropped accordingly; buildings completely contained in a window are retained, while incomplete building raster images, their bounding rectangles and their labels are discarded. The image is split with a fixed sliding window whose overlap ratio is set to 50%, ensuring that every building element is retained (a minimal code sketch of this rasterization and cropping follows below).
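The following is a minimal, non-authoritative sketch of items 2) and 3) above, assuming Python with NumPy and Pillow; the 0.2 mm SVO, the 1:100000 scale, the 256 × 256 window and the 50% overlap come from the text, while the helper names and the coordinate handling are illustrative assumptions only:

```python
import numpy as np
from PIL import Image, ImageDraw

# Li-Openshaw minimum resolution: Fc = D / S, with D = 0.2 mm (SVO) and a
# 1:100000 scale, i.e. 0.2 mm * 100000 = 20 m of ground distance per pixel.
SVO_M = 0.0002                     # smallest visible object, 0.2 mm in metres
SCALE_DENOM = 100000
PIXEL_SIZE = SVO_M * SCALE_DENOM   # 20 m per pixel

def rasterize(buildings, x_min, y_max, width, height):
    """Burn building polygons (lists of (x, y) map coordinates) into one
    binary raster and record each vertex's (row, col) for later recovery."""
    img = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(img)
    vertex_rowcol = []
    for poly in buildings:
        pix = [((x - x_min) / PIXEL_SIZE, (y_max - y) / PIXEL_SIZE) for x, y in poly]
        draw.polygon(pix, fill=1)
        vertex_rowcol.append([(int(row), int(col)) for col, row in pix])
    return np.array(img, dtype=np.uint8), vertex_rowcol

def sliding_crops(raster, boxes, win=256, overlap=0.5):
    """Yield win x win crops with 50% overlap, keeping only the buildings whose
    bounding rectangles (x0, y0, x1, y1, in pixels) lie fully inside a window."""
    stride = int(win * (1.0 - overlap))           # 128 pixels for a 256 window
    h, w = raster.shape
    for top in range(0, max(h - win, 0) + 1, stride):
        for left in range(0, max(w - win, 0) + 1, stride):
            complete = [b for b in boxes
                        if b[0] >= left and b[1] >= top
                        and b[2] <= left + win and b[3] <= top + win]
            if complete:     # discard windows containing no complete building
                yield raster[top:top + win, left:left + win], complete, (top, left)
```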
2. Raster building element feature extraction
Based on the cropped raster image data set of uniform size (256 × 256), MobileNetv2 is used to extract the features of the rasterized building elements, yielding feature-map data (compared with standard convolution, this network effectively reduces the amount of computation and the model size). The bounding rectangle of each building is then used to crop the convolutional feature map of that individual building from the feature map; because the buildings differ in size, ROI Pooling is applied to unify the feature-map size so that the subsequent regression network can be trained.
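A minimal sketch of this feature-extraction step, assuming PyTorch/torchvision; the use of MobileNetV2, the 256 × 256 input and the 32× downsampling follow the text, while replicating the binarized raster to three channels and the cropping arithmetic are assumptions:

```python
import torch
import torchvision

# MobileNetV2 backbone used as a fully convolutional feature extractor; its
# `features` module downsamples a 256 x 256 input by a factor of 32, so each
# crop yields a 1280-channel 8 x 8 feature map.
backbone = torchvision.models.mobilenet_v2().features
backbone.eval()

crops = torch.rand(4, 3, 256, 256)   # binarized building crops replicated to 3 channels
with torch.no_grad():
    feat = backbone(crops)           # shape: (4, 1280, 8, 8)

# One building's bounding rectangle, given in input-pixel coordinates, is
# mapped onto the feature map by dividing by the 32x stride before cropping.
box_px = torch.tensor([64.0, 32.0, 192.0, 160.0])   # x0, y0, x1, y1 (illustrative)
x0, y0, x1, y1 = (box_px / 32).round().long().tolist()
building_feat = feat[0:1, :, y0:y1, x0:x1]           # per-building convolutional features
```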
3. ROI Pooling
The feature maps of the raster buildings extracted above differ in size, so an ROI Pooling operation is applied to unify them. ROI Pooling is a max-pooling layer proposed in Fast R-CNN to handle target candidate boxes (ROIs) of different sizes. The subsequent network requires inputs of the same size, but the target objects on an image vary in size, and changing their size by cropping or scaling would alter their shape characteristics; ROI Pooling therefore borrows the idea of SPPNet (Spatial Pyramid Pooling) and produces an output of fixed size through a specific pooling operation. The concrete operation is as follows: given the feature map of the raster elements and the n ROI (bounding-rectangle) coordinates obtained from cropping, each ROI is mapped onto the feature map according to the ratio between the original image and the feature map (the feature map produced by the preceding convolutional layers is 32 times smaller, so the input ROI is scaled down by a factor of 32); the mapped region is divided into sections of identical size (the number of sections is determined by the output dimension, which is 7 × 7 in this network); max pooling is then applied to each section, yielding a feature map of uniform size for every building.
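A minimal sketch of the ROI Pooling step, assuming torchvision's roi_pool operator; the 7 × 7 output size and the 1/32 spatial scale follow the text, and the tensor shapes and box values are illustrative:

```python
import torch
from torchvision.ops import roi_pool

feat = torch.rand(1, 1280, 8, 8)     # feature map of one 256 x 256 crop
# Bounding rectangles of the buildings in this crop, in input-pixel
# coordinates, given as (batch_index, x0, y0, x1, y1).
rois = torch.tensor([[0.0,  64.0,  32.0, 192.0, 160.0],
                     [0.0,  16.0, 180.0, 120.0, 250.0]])

# spatial_scale = 1/32 maps the pixel coordinates onto the feature map, and
# every ROI is max-pooled into a fixed 7 x 7 grid.
pooled = roi_pool(feat, rois, output_size=(7, 7), spatial_scale=1.0 / 32)
print(pooled.shape)                  # torch.Size([2, 1280, 7, 7])
```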
4. Regression of coordinate points
Based on the unified building feature maps obtained above, the vertex coordinates of the simplified building outline can now be predicted. The network outputs 2n values (n = 5, 10 or 15), namely the coordinates of n points. During training, the building convolutional feature map is flattened into a one-dimensional vector and mapped to the 2n label values, and the regression is computed with a sigmoid activation to obtain the optimal parameters. During testing, 2n coordinate values are predicted for the feature map of each test building with the optimal parameters, giving the vector coordinates of the simplified building outline. According to the classification in item 1) of the data pre-processing, three regression branches are trained separately for the three building size classes. When predicting on test data, the branch with the highest IoU among the three regression branches is taken as the simplification result of the building.
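A minimal sketch of a regression branch and the branch selection, assuming PyTorch and shapely; the 2n sigmoid outputs, the three branch sizes (n = 5, 10, 15) and the IoU-based selection follow the text, while the hidden-layer width, the use of shapely, and scoring the IoU against the (identically normalized) original outline are assumptions:

```python
import torch
import torch.nn as nn
from shapely.geometry import Polygon

class CoordRegressor(nn.Module):
    """Maps a pooled 1280 x 7 x 7 building feature map to n vertex coordinates
    in [0, 1] (normalized to the building's bounding rectangle)."""
    def __init__(self, n_points):
        super().__init__()
        self.n_points = n_points
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(1280 * 7 * 7, 1024), nn.ReLU(),
            nn.Linear(1024, 2 * n_points), nn.Sigmoid())

    def forward(self, x):
        return self.head(x).view(-1, self.n_points, 2)

branches = {n: CoordRegressor(n) for n in (5, 10, 15)}   # one branch per size class

def iou(poly_a, poly_b):
    a, b = Polygon(poly_a).buffer(0), Polygon(poly_b).buffer(0)
    return a.intersection(b).area / max(a.union(b).area, 1e-9)

def simplify(pooled_feat, original_outline):
    """Run all three branches and keep the prediction whose polygon has the
    highest IoU with the (identically normalized) original building outline."""
    best, best_score = None, -1.0
    with torch.no_grad():
        for n, model in branches.items():
            pred = model(pooled_feat)[0].numpy()          # (n, 2) coordinates in [0, 1]
            score = iou(pred, original_outline)
            if score > best_score:
                best, best_score = pred, score
    return best
```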
5. Loss function
During training, a loss function expresses the difference between the coordinates predicted by the model from the feature map and the coordinates of the label contour vertices; the Adam optimizer updates the model parameters in the direction that reduces the loss, yielding the parameter set with the minimum loss value. The loss consists of two parts: a point-regression loss (MSE) and an area-difference loss between the outlines before and after simplification. The mean square error (MSE) is the most common regression loss and is computed from the squared distances between the predicted result and the true label. Assuming the label and the prediction each contain n points, with P the label points and Q the predicted points:
L_MSE = (1/n) Σ_{i=1}^{n} ||P_i − Q_i||²
As shown in fig. 5, the area-difference loss is used to strengthen the geometric characteristics of the prediction so that its area stays close to the area of the building before simplification. The polygon area is computed by summing, for every edge, the signed area of the trapezoid enclosed between that edge and the X axis (width times average height): traversing the vertices clockwise, the X coordinate of the current point is subtracted from that of the next point, so segments traversed forwards contribute positive area and segments traversed backwards contribute negative area, which yields the area of the irregular polygon.
Let the vertices of the irregular polygon be (x_1, y_1), (x_2, y_2), (x_3, y_3), …, (x_n, y_n), with (x_{n+1}, y_{n+1}) = (x_1, y_1). The polygon area is then
S = (1/2) |Σ_{i=1}^{n} (x_{i+1} − x_i)(y_{i+1} + y_i)|
The area-difference loss function L_area is therefore:
L_area = |S_predict − S_sample|
where S_predict and S_sample are the polygon area of the predicted simplified outline and the area of the original building outline, respectively.
The loss function of the model is therefore:
L_loss = L_MSE + L_area
This loss value is minimized during the training phase, yielding the optimal parameter model.
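A minimal sketch of the combined loss and one Adam update step, assuming PyTorch; the MSE term, the trapezoid area term and their unweighted sum follow the formulas above, while the stand-in network, the tensor shapes and the padding of the original outlines are assumptions:

```python
import torch

def polygon_area(pts):
    """Polygon area of a (batch, n, 2) tensor of vertices, using the trapezoid
    form 0.5 * |sum (x_{i+1} - x_i) * (y_{i+1} + y_i)|."""
    x, y = pts[..., 0], pts[..., 1]
    x_next, y_next = torch.roll(x, -1, dims=-1), torch.roll(y, -1, dims=-1)
    return 0.5 * torch.abs(((x_next - x) * (y_next + y)).sum(dim=-1))

def simplification_loss(pred, label, original):
    """L_loss = L_MSE + L_area, with pred/label of shape (batch, n, 2) and
    `original` the pre-simplification outlines (padded to a common length)."""
    l_mse = ((pred - label) ** 2).sum(dim=-1).mean()
    l_area = torch.abs(polygon_area(pred) - polygon_area(original)).mean()
    return l_mse + l_area

# One illustrative Adam update (shapes only; a real loop iterates over batches
# and stops at convergence or at the maximum iteration count).
model = torch.nn.Linear(1280 * 7 * 7, 2 * 10)     # stand-in for one regression branch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
feat = torch.rand(8, 1280 * 7 * 7)                # flattened pooled feature maps
label = torch.rand(8, 10, 2)                      # simplified-outline labels
original = torch.rand(8, 12, 2)                   # pre-simplification outlines
pred = torch.sigmoid(model(feat)).view(8, 10, 2)
loss = simplification_loss(pred, label, original)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```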
6. Results of the experiment
The trained model is used for test verification on the test data; the results are shown in fig. 6. The visualization of the simplification results shows that the model simplifies building outlines well: the shape characteristics of the outlines are preserved and the building areas before and after simplification are essentially equal, meeting the requirements of building vector outline simplification. The model is also highly reusable: the pre-trained model can be used repeatedly, and when different degrees of simplification are required there is no need to search for the most suitable parameters through repeated runs; it suffices to supply different training data. This is of great significance for automating map generalization.
The experimental results show that the proposed method has a strong learning capacity for building vector outline simplification, achieves simplification that preserves morphological features well, has strong applicability and high robustness, can be reused and applied at different simplification scales, and reaches high accuracy with only fine-tuning of the pre-trained model, making it suitable for multi-task scenarios.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. A deep-learning-based building vector outline simplification model establishing method, characterized by comprising the following steps:
step 1, extracting one piece of pre-simplification building outline data from an existing pre-simplification building vector outline data set and selecting the corresponding simplified building outline data from a simplified building vector outline data set; generating raster data and bounding-rectangle data from the pre-simplification building outline data, and constructing label data from the corresponding simplified building outline data;
repeating the above until all pre-simplification building vector outline data in the data set have been traversed, wherein the raster data corresponding to all pre-simplification building outlines together form a rasterized large image of the pre-simplification building vector outline data set, the bounding-rectangle data of all pre-simplification buildings form a bounding-rectangle data set, and all label data form a label data set;
step 2, using a cropping frame of a preset pixel size as a sliding window, cropping the rasterized large image of the pre-simplification building vector outline data set, the bounding-rectangle data set and the label data set; retaining the buildings completely contained in the frame together with their bounding-rectangle data and label data and removing incomplete buildings to obtain cropped building raster data, corresponding bounding-rectangle data and building outline label data; storing each cropped raster image, its building bounding-rectangle data and its cropped building outline label data as one training sample in a building training set;
dividing the buildings in the label data into small-scale, medium-scale and large-scale buildings according to the number of coordinate points contained in each building in the cropped building outline label data, and storing them in a small-scale, a medium-scale and a large-scale building regression training set, respectively;
step 3, after the entire rasterized large image of the pre-simplification building vector outline data set, the pre-simplification building bounding-rectangle data set and the label data set have been cropped with the cropping frame, obtaining a building raster training set, a small-scale building regression training set, a medium-scale building regression training set and a large-scale building regression training set;
step 4, extracting a training sample from the building raster training set and performing building-element feature extraction on the raster data in the sample to obtain feature-map data of the raster building elements; cropping the convolutional feature map of each individual building from this feature-map data using the corresponding bounding-rectangle data in the training sample;
training regression models with the small-scale, medium-scale and large-scale building training sets respectively, and obtaining three regression models corresponding to buildings of different scales after training;
step 5, performing an ROI Pooling operation on the convolutional feature maps obtained in step 4 to obtain feature maps of uniform size;
step 6, passing the uniformly sized feature maps obtained in step 5 through the three regression models of step 4 corresponding to buildings of different scales to obtain three regression branches for each feature map, scoring the three branches with the IoU index, and taking the branch with the highest score as the simplification result of the building corresponding to that feature map;
step 7, judging whether the current iteration count exceeds the maximum iteration count; if so, going to step 8; if not, computing the loss between the simplification result generated in step 6 and the simplified building outline label data of the same building in the training sample using the loss function and judging whether the loss function has converged; if it has converged, going to step 8; otherwise updating the weight matrices and bias vectors with the Adam optimizer, increasing the iteration count by 1 and returning to step 4;
step 8, saving the weight matrix and bias vector parameters of the last iteration to obtain the deep-learning-based building outline simplification model.
2. The deep-learning-based building vector outline simplification model establishing method according to claim 1, wherein the raster data in step 1 are generated from the pre-simplification building vector outline data as follows:
rasterizing the building elements and recording, for every coordinate point of a building element, its rasterized row-column value and the corresponding original coordinate value, wherein the image resolution during rasterization is computed with the Li-Openshaw minimum-resolution formula Fc = D/S, where Fc is the pixel size of the raster image, S is the element scale and D is the smallest visible object.
3. The deep-learning-based building vector outline simplification model establishing method according to claim 1, wherein in step 2 the cropping frame of the preset pixel size is used as a sliding window that moves over the data to be cropped from top to bottom and from left to right with an overlap ratio of 50%; sliding stops once the window has covered the entire rasterized large image of the pre-simplification building vector outline data set; the complete buildings inside the sliding cropping frame are retained and incomplete buildings are discarded; as the window slides, adjacent left-right or top-bottom frames overlap by 50%, and the buildings in the overlapping part serve as data of both frames.
4. The deep-learning-based building vector outline simplification model establishing method according to claim 1, wherein in step 2, according to the number of coordinate points contained in each building in the cropped building outline label data, a building whose outline has 5 points is defined as a small-scale building, a building whose outline has 6-10 points is defined as a medium-scale building and its outline is processed to 10 points, and a building whose outline has more than 10 points is defined as a large-scale building and its outline is processed to 15 points.
5. The deep-learning-based building vector outline simplification model establishing method according to claim 1, wherein in step 4, MobileNetv2 is used to extract building-element features from the cropped raster data.
6. The deep-learning-based building vector outline simplification model establishing method according to claim 1, wherein the loss function in step 7 is a linear combination of the mean square error of the coordinate-point distances and the area difference.
7. A deep-learning-based building vector outline simplification model, characterized by being established by the method of any one of claims 1-6.
8. A deep-learning-based building set vector outline simplification method, characterized in that simplification is performed with the deep-learning-based building outline simplification model obtained in claim 1 and the three regression models corresponding to buildings of different scales, the method specifically comprising the following steps:
step 1, taking out the data to be simplified one by one from a building set vector outline data set to be simplified and generating raster data and bounding-rectangle data from each building outline to be simplified, until all building vector outline data to be simplified in the data set have been traversed, wherein the raster data corresponding to all individual building outlines to be simplified together form a rasterized large image of the building vector outline data set to be simplified, and the bounding-rectangle data of all building outlines to be simplified form a bounding-rectangle data set;
step 2, using a cropping frame of a preset pixel size as a sliding window, cropping the rasterized large image of the building vector outline data set to be simplified and the bounding-rectangle data set; retaining the buildings completely contained in the frame together with their bounding-rectangle data and removing incomplete buildings to obtain cropped raster data of the buildings to be simplified and the corresponding building bounding-rectangle data; storing each cropped raster image and its building bounding-rectangle data as one test sample in a building test set;
step 3, extracting the test samples one by one from the test set and performing building-element feature extraction on the cropped raster data in each sample with the deep-learning-based building outline simplification model to obtain feature-map data of the raster building elements; cropping the convolutional feature map of each individual building from this feature-map data using the corresponding bounding-rectangle data in the test sample;
step 4, performing an ROI Pooling operation on the convolutional feature maps obtained in step 3 to obtain feature maps of uniform size;
step 5, passing the uniformly sized feature maps obtained in step 4 through the three trained regression models to obtain three regression branches for each feature map, scoring the three branches with the IoU index, and taking the branch with the highest score as the simplification result of the building corresponding to that feature map.
CN202110980000.2A 2021-08-25 2021-08-25 Building vector outline simplifying method, model and model establishing method based on deep learning Pending CN113850254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110980000.2A CN113850254A (en) 2021-08-25 2021-08-25 Building vector outline simplifying method, model and model establishing method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110980000.2A CN113850254A (en) 2021-08-25 2021-08-25 Building vector outline simplifying method, model and model establishing method based on deep learning

Publications (1)

Publication Number Publication Date
CN113850254A true CN113850254A (en) 2021-12-28

Family

ID=78976182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110980000.2A Pending CN113850254A (en) 2021-08-25 2021-08-25 Building vector outline simplifying method, model and model establishing method based on deep learning

Country Status (1)

Country Link
CN (1) CN113850254A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115482466A (en) * 2022-09-28 2022-12-16 广西壮族自治区自然资源遥感院 Three-dimensional model vegetation area lightweight processing method based on deep learning


Similar Documents

Publication Publication Date Title
CN110930454B (en) Six-degree-of-freedom pose estimation algorithm based on boundary box outer key point positioning
CN111145174B (en) 3D target detection method for point cloud screening based on image semantic features
CN112633277A (en) Channel ship board detection, positioning and identification method based on deep learning
CN112580507B (en) Deep learning text character detection method based on image moment correction
CN114694038A (en) High-resolution remote sensing image classification method and system based on deep learning
CN113761999A (en) Target detection method and device, electronic equipment and storage medium
CN113420759B (en) Anti-occlusion and multi-scale dead fish identification system and method based on deep learning
CN113971809A (en) Text recognition method and device based on deep learning and storage medium
CN115100652A (en) Electronic map automatic generation method based on high-resolution remote sensing image
CN113627440A (en) Large-scale point cloud semantic segmentation method based on lightweight neural network
CN115147601A (en) Urban street point cloud semantic segmentation method based on self-attention global feature enhancement
CN113205023B (en) High-resolution image building extraction fine processing method based on prior vector guidance
CN113850254A (en) Building vector outline simplifying method, model and model establishing method based on deep learning
CN115019163A (en) City factor identification method based on multi-source big data
Wang et al. Based on the improved YOLOV3 small target detection algorithm
CN113902792A (en) Building height detection method and system based on improved RetinaNet network and electronic equipment
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
CN111612802A (en) Re-optimization training method based on existing image semantic segmentation model and application
CN115082778B (en) Multi-branch learning-based homestead identification method and system
Shi et al. Fast classification and detection of marine targets in complex scenes with YOLOv3
CN113077484A (en) Image instance segmentation method
CN114005043B (en) Small sample city remote sensing image information extraction method based on domain conversion and pseudo tag
CN113591810B (en) Vehicle target pose detection method and device based on boundary tight constraint network and storage medium
Liu et al. An Improved YOLOv5 for Transformer Nameplate Text Detection
Jiang et al. Polyline simplification using a region proposal network integrating raster and vector features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination