CN114219819A - Oblique photography model unitization method based on orthoimage boundary detection - Google Patents

Oblique photography model unitization method based on orthoimage boundary detection

Info

Publication number
CN114219819A
CN114219819A (application CN202111373225.8A)
Authority
CN
China
Prior art keywords
building
model
boundary
image
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111373225.8A
Other languages
Chinese (zh)
Inventor
辛佩康 (Xin Peikang)
高丙博 (Gao Bingbo)
吴友 (Wu You)
余芳强 (Yu Fangqiang)
张铭 (Zhang Ming)
谷志旺 (Gu Zhiwang)
刘寅 (Liu Yin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Construction No 4 Group Co Ltd
Original Assignee
Shanghai Construction No 4 Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Construction No 4 Group Co Ltd filed Critical Shanghai Construction No 4 Group Co Ltd
Priority to CN202111373225.8A
Publication of CN114219819A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention exploits the advantage that the orthoimage and the real-scene model share the same geographic coordinate system. Combining the oblique photography model with a deep learning method, it automatically extracts building outlines through orthoimage boundary detection, then uses the coordinate attributes of the orthoimage to obtain the specific geographic position of each unitized building, thereby extracting the unitization information of the oblique photography model. This improves the efficiency of automatic building unitization and provides data support for later unitized management. By using the orthoimage to unitize the oblique photography real-scene model, the invention achieves automatic unitization and solves problems such as the low efficiency of extracting ground-object vector boundaries and coordinate positioning deviations that arise when unitizing real-scene models.

Description

Oblique photography model unitization method based on orthoimage boundary detection
Technical Field
The invention relates to a method for unitizing an oblique photography model based on orthoimage boundary detection.
Background
In recent years, with the popularization of civil unmanned aerial vehicles and the rapid development of oblique photography technology, mounting a multi-lens sensor on a single flight platform and photographing simultaneously from the vertical direction and several oblique directions has made it possible to capture the true appearance of buildings and other ground features, and the rapid generation of real-scene three-dimensional models has become an important means of acquiring three-dimensional spatial information. Extracting the three-dimensional spatial information of buildings is of great significance for urban and rural planning and management. However, a real-scene three-dimensional model obtained by oblique photography lacks structured semantic information, so an individual building cannot be selected or separated. In practical applications such a model remains at the level of browsing and geometric measurement: individual ground features cannot be selected, indexed, annotated with attribute information, or managed individually. It is therefore necessary to unitize the oblique photography real-scene model.
At present, the common unitization methods are cutting unitization, reconstruction unitization, ID unitization and dynamic unitization. Dynamic unitization can directly use two-dimensional vector data; its updating and classification costs are low, it produces no jagged edges during rendering, it leaves the LOD index unchanged, and it can satisfy different data application requirements. At present, however, unitization still requires manually delineating building outline boundaries and obtaining geographic position information through software operations, which costs considerable manpower and time; moreover, when buildings are delineated manually, each operator defines the boundary by a different standard, which also complicates later planning and management.
Therefore, how to complete the unitization operation uniformly and rapidly remains a problem.
Disclosure of Invention
The invention aims to provide a method for unitizing an oblique photography model based on orthoimage boundary detection.
To this end, the present invention provides a method for unitizing an oblique photography model based on orthoimage boundary detection, comprising:
step S1, acquiring images from multiple angles through an aircraft platform carrying a multi-lens sensor, and acquiring an oblique photography three-dimensional model and an orthoimage without projection distortion;
step S2, constructing a neural network model, training the neural network model, and carrying out boundary detection on the orthophoto image by using the trained neural network model to obtain a building orthophoto projection contour boundary;
step S3, regularizing the building orthographic projection contour boundary to obtain a building orthographic projection contour boundary after the orthographic image regularization, performing real-scene model coordinate transformation on the building orthographic projection contour boundary after the orthographic image regularization to obtain a geographical coordinate value of each corner point of the building outline, and generating a real-scene model building boundary plane vector diagram based on the geographical coordinate value of each corner point of the building outline;
and step S4, building a bounding box model based on the real-scene model building boundary plane vector diagram, superimposing the bounding box model on the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, and rendering the triangular faces of the superimposed three-dimensional model with a specified overlay color, thereby achieving automatic unitization of the oblique photography real-scene model.
Further, in the above method, in step S1, acquiring images from multiple angles through an aircraft platform carrying multiple lens sensors, and acquiring an oblique photography three-dimensional model and an orthoimage without projection distortion, the method includes:
step S11, carrying out multi-azimuth and multi-angle aerial photography on a target area from the air by using an aircraft platform carrying a multi-lens sensor to obtain a sequence image with a preset overlapping degree;
step S12, reconstructing and generating an oblique photography three-dimensional model according to the sequence images by using live-action modeling software, and deriving aerial image dense matching point clouds;
and step S13, generating an orthoimage without projection distortion according to the sequence image by using the live-action modeling software, wherein the geographic coordinate system of the orthoimage is consistent with the geographic coordinate system of the oblique photography three-dimensional model.
Further, in the above method, in step S2, constructing a neural network model and training the neural network model, includes:
step S21, building a deep learning framework and building a neural network model for detecting the building boundary image;
step S22, establishing a training set, a verification set and a test set;
step S23, inputting the training set and the verification set into the neural network model for building boundary image detection, setting the training environment, number of training epochs, training threshold and training step size of the model, executing model training, and saving the model parameters after training to obtain an initial boundary detection training model;
step S24, testing the initial boundary detection training model with the test set; if the accuracy of the test result is greater than or equal to 97%, model training and testing are finished and the initial boundary detection training model is taken as the trained neural network model; if the accuracy is less than 97%, iteratively optimizing the parameters of the initial boundary detection training model by expanding the dataset, applying data augmentation and adjusting hyperparameters until the accuracy is greater than or equal to 97%, thereby obtaining the trained neural network model.
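The accept-or-retrain logic of steps S23-S24 can be sketched as follows; `train_round` and `predict` are hypothetical placeholders for the framework-specific training and inference calls, which the patent does not name:

```python
ACCURACY_THRESHOLD = 0.97  # acceptance criterion of step S24

def accuracy(predictions, labels):
    """Fraction of test samples whose predicted boundary mask matches its label."""
    correct = sum(1 for p, l in zip(predictions, labels) if p == l)
    return correct / len(labels)

def train_until_accurate(train_round, predict, test_labels, max_rounds=10):
    """Retrain (expanding the dataset, augmenting data, or tuning
    hyperparameters each round) until test accuracy reaches 97%."""
    acc = 0.0
    for round_idx in range(max_rounds):
        train_round(round_idx)          # one optimization round (hypothetical hook)
        acc = accuracy(predict(), test_labels)
        if acc >= ACCURACY_THRESHOLD:
            break
    return acc
```

The `max_rounds` cap is an added safeguard; the patent simply iterates until the threshold is met.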
Further, in the method, in step S22, establishing a training set and a verification set includes:
step S221, selecting a data set similar to the building style of a research area;
step S222, selecting a part of the ortho-images, and carrying out building outline marking on the selected part of the ortho-images by using an image marking tool to obtain marked ortho-images;
and step S223, randomly disordering the data sets similar to the architectural style of the research area and the marked orthographic images to fuse and establish a model training data set, and dividing the model training data set into a training set, a verification set and a test set according to a preset proportion.
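The dataset fusion and split of step S223 can be sketched as below; the 7:2:1 ratio and the fixed seed are illustrative assumptions, since the patent only specifies "a preset proportion":

```python
import random

def split_dataset(samples, ratios=(0.7, 0.2, 0.1), seed=42):
    """Randomly shuffle the fused sample list (public dataset plus labelled
    orthoimage tiles, step S223) and split it into training, validation
    and test sets by a preset ratio."""
    rng = random.Random(seed)           # fixed seed keeps the split reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```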
Further, in the above method, in step S2, performing boundary detection on the orthoimage by using the trained neural network model to obtain a building orthographic projection contour boundary includes:
step S25, determining a cropping size from the image size accepted by the trained neural network model, and uniformly cropping the orthoimage into tiles of that size to obtain an orthoimage prediction set;
and step S26, detecting all the orthoimage prediction sets by using the trained neural network model to obtain building block binary images of all the orthoimage prediction sets.
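A minimal sketch of the uniform cropping in step S25, computing tile windows for a given network input size; clipping edge tiles to the image bounds is an assumption, as the patent does not say how partial tiles are handled:

```python
def tile_image(height, width, tile):
    """Cut an orthoimage of height x width pixels into uniform tiles of the
    size the trained network accepts (step S25), returning (row, col,
    tile_h, tile_w) windows; edge tiles are clipped to the image bounds."""
    windows = []
    for r in range(0, height, tile):
        for c in range(0, width, tile):
            windows.append((r, c, min(tile, height - r), min(tile, width - c)))
    return windows
```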
Further, in the above method, the step S3 of regularizing the building orthographic projection contour boundary to obtain a regularized building orthographic projection contour boundary of an orthographic image includes:
step S301, based on an arbitrary polygon seed filling method, hole filling is carried out on a building block binary image of the orthophoto image prediction image set to obtain a filled building block binary image;
step S302, building blocks with local connection are segmented in the filled building block binary image by using a watershed algorithm to obtain a segmented binary image;
step S303, widening the gaps between the building blocks in the segmented binary image by using an erosion algorithm to obtain an optimized building block binary image;
step S304, extracting the building outline of the building block binary image after the step optimization based on a binary image outline extraction algorithm;
step S305, performing approximate fitting processing on the extracted boundaries of the building outlines by using a polygon fitting curve method to obtain building outline boundary polygons;
step S306, acquiring the length and azimuth angle of each edge in the building outline boundary polygon;
step S307, comparing the length of each side of the building outline boundary polygon, and selecting the longest side as a main direction;
step S308, rotating the building outline boundary around its center point until each edge is perpendicular or parallel to the main direction, obtaining the rotated building outline boundary;
step S309, correcting adjacent edges of the rotated building outline boundary: when adjacent edges are perpendicular, taking their intersection point; when adjacent edges are parallel, based on an adjacent-edge distance threshold, either translating the short edge onto the long edge or inserting a connecting segment between them, finally generating the regularized building orthographic projection contour boundary of the orthoimage.
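Steps S306-S308 (main-direction selection and rotation) can be sketched in plain Python; the adjacent-edge correction of step S309 is omitted for brevity:

```python
import math

def regularize_main_direction(polygon):
    """Find the longest edge of the fitted outline polygon (steps S306-S307),
    take its azimuth as the main direction, and rotate the polygon about its
    centroid so that direction becomes axis-aligned (step S308)."""
    n = len(polygon)
    best_len, theta = -1.0, 0.0
    for i in range(n):                   # longest edge and its direction angle
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        length = math.hypot(x2 - x1, y2 - y1)
        if length > best_len:
            best_len, theta = length, math.atan2(y2 - y1, x2 - x1)
    cx = sum(p[0] for p in polygon) / n  # vertex centroid as rotation center
    cy = sum(p[1] for p in polygon) / n
    cos_t, sin_t = math.cos(-theta), math.sin(-theta)
    return [(cx + (x - cx) * cos_t - (y - cy) * sin_t,
             cy + (x - cx) * sin_t + (y - cy) * cos_t) for x, y in polygon]
```

Using the vertex centroid as the rotation center is an assumption; the patent only says the outline is rotated "around a central point".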
Further, in the above method, in step S3, the performing real-world model coordinate transformation on the boundary of the orthographic projection contour of the building after the orthographic image is regularized to obtain a geographic coordinate value of each corner point of the building contour, including:
step S321, extracting the affine matrix of the orthoimage by using the GDAL raster spatial data conversion library, wherein the affine matrix comprises the geographic coordinates (X, Y) of the image's upper-left corner point and the conversion scale α between pixel coordinates and actual geographic coordinates (the scale carries a sign);
step S322, constructing the conversion function between the pixel coordinates (x, y) of each corner point of the regularized building orthographic projection contour boundary in any orthoimage and the corresponding geographic coordinates (X_COOR, Y_COOR):
X_COOR = X + x·α;
Y_COOR = Y + y·α;
step S323, converting the pixel coordinates of each corner point of the regularized building orthographic projection contour boundary of each orthoimage into the corresponding geographic coordinates according to the conversion function.
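The conversion function above maps pixel coordinates to geographic coordinates through the orthoimage's affine transform. A sketch following GDAL's `GetGeoTransform()` coefficient layout; the patent's single scale α corresponds to the x and y pixel sizes, the latter negative for north-up images, matching the remark that the scale carries a sign:

```python
def pixel_to_geo(geotransform, px, py):
    """Map a corner's pixel coordinates (px, py) to geographic coordinates
    using the orthoimage's affine transform. `geotransform` follows GDAL's
    GetGeoTransform() layout: (X_ul, x_scale, row_rot, Y_ul, col_rot, y_scale).
    Rotation terms are assumed zero, as in a north-up orthoimage."""
    x_ul, x_scale, _, y_ul, _, y_scale = geotransform
    return (x_ul + px * x_scale, y_ul + py * y_scale)
```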
Further, in the above method, in step S3, generating a real-world model building boundary plane vector diagram based on the geographical coordinate value of each corner point of the building outline, including:
and step S331, generating a real-scene model building boundary plane vector diagram based on the geographic coordinate value of each corner point of the building outline, wherein the real-scene model building boundary plane vector diagram and the oblique photography three-dimensional model are in the same geographic coordinate system.
Further, in the above method, in step S4, the building bounding box model is built based on the real-world model building boundary plane vector diagram, including:
step S401, obtaining the three-dimensional geographic coordinates of the aerial triangulation tie points from the aerial triangulation result;
step S402, judging, according to the geographic coordinate value of each corner point of the building outline and the three-dimensional geographic coordinates of the tie points, the containment relation between each tie point and the building outline based on the ray-casting principle, and screening out the tie points inside each building outline;
step S403, comparing the elevations of the tie points inside each building outline to obtain the lowest and highest elevations within each outline;
step S404, subtracting the lowest elevation from the highest elevation in each building outline to obtain the height of the building's bounding box;
step S405, using the vector geometric polygon in the real scene model building boundary plane vector diagram as the lower bottom surface of the bounding box model of the building single body, and using the height of the bounding box of the building single body as the height of the bounding box model, and creating the building single body bounding box polyhedral model.
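Steps S402-S404 (the ray-casting containment test and the bounding-box height) can be sketched as:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test of step S402: cast a horizontal ray from pt and
    count crossings with the building outline; an odd count means inside."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def bounding_box_height(tie_points, outline):
    """Steps S402-S404: keep the aerial-triangulation tie points inside the
    outline, then height = highest elevation - lowest elevation."""
    zs = [z for (x, y, z) in tie_points if point_in_polygon((x, y), outline)]
    return max(zs) - min(zs)
```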
Further, in the above method, superimposing the bounding box model on the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, and rendering the triangular faces of the superimposed three-dimensional model with a specified overlay color, thereby achieving automatic unitization of the oblique photography real-scene model, includes:
superimposing the building bounding box model at the lowest elevation position within the building outline of the oblique photography three-dimensional model to obtain a composite model, and rendering the composite model with a specified overlay color, so that each building unit is highlighted, thereby achieving dynamic unitization of the oblique photography real-scene model.
Compared with the prior art, the method fully exploits the advantage that the orthoimage and the real-scene model share the same geographic coordinate system. Combining the oblique photography model with a deep learning method, it automatically extracts building outlines through orthoimage boundary detection, then uses the coordinate attributes of the orthoimage to obtain the specific geographic position of each unitized building, thereby extracting the unitization information of the oblique photography model, improving the efficiency of automatic building unitization, and providing data support for later unitized management.
By using the orthoimage to unitize the oblique photography real-scene model, the invention achieves automatic unitization and solves problems such as the low efficiency of extracting ground-object vector boundaries and coordinate positioning deviations that arise when unitizing real-scene models.
Drawings
FIG. 1 is a flowchart of the oblique photography model unitization method based on orthoimage boundary detection;
FIG. 2 is a building block binary image of an orthoimage according to an embodiment of the present invention;
FIG. 3 shows a regularized building outline boundary on an orthoimage according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1, the present invention provides a method for unitizing an oblique photography model based on orthoimage boundary detection, comprising:
step S1, acquiring images from multiple angles through an aircraft platform carrying a multi-lens sensor, and acquiring an oblique photography three-dimensional model and an orthoimage without projection distortion;
step S2, constructing a neural network model, training the neural network model, and carrying out boundary detection on the orthophoto image by using the trained neural network model to obtain a building orthophoto projection contour boundary;
step S3, regularizing the building orthographic projection contour boundary to obtain a building orthographic projection contour boundary after the orthographic image regularization, performing real-scene model coordinate transformation on the building orthographic projection contour boundary after the orthographic image regularization to obtain a geographical coordinate value of each corner point of the building outline, and generating a real-scene model building boundary plane vector diagram based on the geographical coordinate value of each corner point of the building outline;
and step S4, building a bounding box model based on the real-scene model building boundary plane vector diagram, superimposing the bounding box model on the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, and rendering the triangular faces of the superimposed three-dimensional model with a specified overlay color, thereby achieving automatic unitization of the oblique photography real-scene model.
An orthoimage offers rich image information, is intuitive and true to reality, and is free of projection distortion, giving it good interpretability and measurability; the geographic position of a building can be determined through the coordinate relationship between the orthoimage and the real-scene model.
By using the orthoimage to unitize the oblique photography real-scene model, the invention achieves automatic unitization and solves problems such as the low efficiency of extracting ground-object vector boundaries and coordinate positioning deviations that arise when unitizing real-scene models.
The invention fully exploits the advantage that the orthoimage and the real-scene model share the same geographic coordinate system: combining the oblique photography model with a deep learning method, it automatically extracts building outlines through orthoimage boundary detection, then uses the coordinate attributes of the orthoimage to obtain the specific geographic position of each unitized building, thereby extracting the unitization information of the oblique photography model, improving the efficiency of automatic building unitization, and providing data support for later unitized management.
In an embodiment of the oblique photography model unitization method based on the orthoimage boundary detection of the present invention, step S1, acquiring an image from multiple angles through an aircraft platform carrying multiple lens sensors, and acquiring an oblique photography three-dimensional model and an orthoimage without projection distortion, includes:
step S11, carrying out multi-azimuth and multi-angle aerial photography on a target area from the air by using an aircraft platform carrying a multi-lens sensor to obtain a sequence image with a preset overlapping degree;
step S12, reconstructing and generating an oblique photography three-dimensional model according to the sequence images by using live-action modeling software, and deriving aerial image dense matching point clouds;
and step S13, generating an orthoimage without projection distortion according to the sequence image by using the live-action modeling software, wherein the geographic coordinate system of the orthoimage is consistent with the geographic coordinate system of the oblique photography three-dimensional model.
In an embodiment of the oblique photography model unitization method based on the orthoimage boundary detection, in step S2, the constructing and training of the neural network model includes:
step S21, building a deep learning framework and building a neural network model for detecting the building boundary image;
step S22, establishing a training set, a verification set and a test set;
step S23, inputting the training set and the verification set into the neural network model for building boundary image detection, setting the training environment, number of training epochs, training threshold and training step size of the model, executing model training, and saving the model parameters after training to obtain an initial boundary detection training model;
step S24, testing the initial boundary detection training model with the test set; if the accuracy of the test result is greater than or equal to 97%, model training and testing are finished and the initial boundary detection training model is taken as the trained neural network model; if the accuracy is less than 97%, iteratively optimizing the parameters of the initial boundary detection training model by expanding the dataset, applying data augmentation and adjusting hyperparameters until the accuracy is greater than or equal to 97%, thereby obtaining the trained neural network model.
In an embodiment of the oblique photography model unitization method based on the orthoimage boundary detection of the present invention, in step S22, establishing a training set and a verification set includes:
step S221, selecting a data set similar to the building style of a research area;
step S222, selecting a part of the ortho-images, and carrying out building outline marking on the selected part of the ortho-images by using an image marking tool to obtain marked ortho-images;
and step S223, randomly disordering the data sets similar to the architectural style of the research area and the marked orthographic images to fuse and establish a model training data set, and dividing the model training data set into a training set, a verification set and a test set according to a preset proportion.
In an embodiment of the oblique photography model unitization method based on orthoimage boundary detection of the present invention, in step S2, performing boundary detection on the orthoimage by using the trained neural network model to obtain the building orthographic projection contour boundary includes:
step S25, determining a cropping size from the image size accepted by the trained neural network model, and uniformly cropping the orthoimage into tiles of that size to obtain an orthoimage prediction set;
and step S26, detecting all the orthoimage prediction sets by using the trained neural network model to obtain building block binary images of all the orthoimage prediction sets.
In an embodiment of the oblique photography model unitization method based on the ortho-image boundary detection of the present invention, the step S3 of regularizing the building ortho-projection contour boundary to obtain the regular building ortho-projection contour boundary of the ortho-image includes:
step S301, based on an arbitrary polygon seed filling method, hole filling is carried out on a building block binary image of the orthophoto image prediction image set to obtain a filled building block binary image;
step S302, building blocks with local connection are segmented in the filled building block binary image by using a watershed algorithm to obtain a segmented binary image;
step S303, widening the gaps between the building blocks in the segmented binary image by using an erosion algorithm to obtain an optimized building block binary image;
step S304, extracting the building outline of the building block binary image after the step optimization based on a binary image outline extraction algorithm;
step S305, performing approximate fitting processing on the extracted boundaries of the building outlines by using a polygon fitting curve method to obtain building outline boundary polygons;
step S306, acquiring the length and azimuth angle of each edge in the building outline boundary polygon;
step S307, comparing the length of each side of the building outline boundary polygon, and selecting the longest side as a main direction;
step S308, rotating the building outline boundary around its center point until each edge is perpendicular or parallel to the main direction, obtaining the rotated building outline boundary;
step S309, correcting adjacent edges of the rotated building outline boundary: when adjacent edges are perpendicular, taking their intersection point; when adjacent edges are parallel, based on an adjacent-edge distance threshold, either translating the short edge onto the long edge or inserting a connecting segment between them, finally generating the regularized building orthographic projection contour boundary of the orthoimage.
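The hole filling of step S301 can be illustrated on a small raster mask. The patent names an arbitrary-polygon seed-filling method; this border flood fill is a simpler stand-in with the same effect on a binary grid:

```python
from collections import deque

def fill_holes(mask):
    """Flood-fill the background from the image border; any 0-pixel not
    reached is enclosed by building pixels (a hole) and is set to 1."""
    h, w = len(mask), len(mask[0])
    reached = [[False] * w for _ in range(h)]
    queue = deque((r, c) for r in range(h) for c in range(w)
                  if (r in (0, h - 1) or c in (0, w - 1)) and mask[r][c] == 0)
    for r, c in queue:                      # seed all border background pixels
        reached[r][c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not reached[nr][nc] and mask[nr][nc] == 0:
                reached[nr][nc] = True
                queue.append((nr, nc))
    return [[1 if mask[r][c] == 1 or not reached[r][c] else 0
             for c in range(w)] for r in range(h)]
```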
In an embodiment of the oblique photography model unitization method based on orthoimage boundary detection of the present invention, in step S3, performing real-scene model coordinate transformation on the regularized building orthographic projection contour boundary to obtain a geographic coordinate value of each corner point of the building outline includes:
step S321, extracting the affine matrix of the orthoimage by using the GDAL raster spatial data conversion library, wherein the affine matrix comprises the geographic coordinates (X, Y) of the image's upper-left corner point and the conversion scale α between pixel coordinates and actual geographic coordinates (the scale carries a sign);
step S322, constructing the conversion function between the pixel coordinates (x, y) of each corner point of the regularized building orthographic projection contour boundary in any orthoimage and the corresponding geographic coordinates (X_COOR, Y_COOR):
X_COOR = X + x·α;
Y_COOR = Y + y·α;
step S323, converting the pixel coordinates of each corner point of the regularized building orthographic projection contour boundary of each orthoimage into the corresponding geographic coordinates according to the conversion function.
In an embodiment of the oblique photography model unitization method based on orthoimage boundary detection of the present invention, step S3 is to generate a real-scene model building boundary plane vector diagram based on the geographical coordinate value of each corner point of the building outline, including:
and step S331, generating a real-scene model building boundary plane vector diagram based on the geographic coordinate value of each corner point of the building outline, wherein the real-scene model building boundary plane vector diagram and the oblique photography three-dimensional model are in the same geographic coordinate system.
In an embodiment of the oblique photography model unitization method based on the orthoimage boundary detection of the present invention, step S4, the building bounding box model is established based on the real-world model building boundary plane vector diagram, which includes:
step S401, obtaining three-dimensional geographic coordinate information of an aerial triangular connection point according to the aerial triangulation result;
s402, judging the inclusion relation between the aerial triangle connection points and the building outline based on the ray method principle according to the geographic coordinate value of each corner point of the building outline and the three-dimensional geographic coordinate of the aerial triangle connection points, and screening out the aerial triangle connection points in each building outline;
step S403, comparing elevations of the aerial triangular connecting points in each building outline to obtain the lowest elevation and the highest elevation in each building outline;
step S404, subtracting the lowest elevation from the highest elevation in each building outline to obtain the height of the bounding box of the building unit;
step S405, using the vector geometric polygon in the real scene model building boundary plane vector diagram as the lower bottom surface of the bounding box model of the building single body, and using the height of the bounding box of the building single body as the height of the bounding box model, and creating the building single body bounding box polyhedral model.
In an embodiment of the oblique photography model unitization method based on orthoimage boundary detection of the present invention, in step S4, the bounding box model is superimposed on the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, and a triangular surface in the superimposed three-dimensional model is rendered and a specified color is superimposed, so as to realize automatic unitization of the oblique photography real scene model, including:
and superposing the building monomer bounding box model to the lowest elevation position in the building outline of the oblique photography three-dimensional model to obtain a composite model, and rendering and superposing the composite model with a specified color, so that the highlight display of the model singleness is realized, and the dynamic singleness of the oblique photography real scene model is further realized.
Specifically, the oblique photography model unitization method based on orthoimage boundary detection mainly comprises the following steps:
1. Three-dimensional live-action model (oblique photography three-dimensional model) and orthoimage acquisition
1.1 aerial data acquisition
Firstly, an aircraft platform carrying a multi-lens sensor is utilized to carry out multi-azimuth and multi-angle aerial photography on a target area from the air to obtain a sequence image with a preset overlapping degree.
For example, places such as Ji'an in Jiangxi, Foshan in Guangdong, and Jinjiang in Fujian, China can be selected as aerial photography target areas. A professional DJI Matrice 300 RTK unmanned aerial vehicle is used, and its flight route is planned so that the target areas are photographed comprehensively; the unmanned aerial vehicle then carries a multi-lens sensor to photograph the target areas from multiple angles and directions while cruising, obtaining sequence images with a certain degree of overlap.
1.2 live-action model reconstruction
And then, reconstructing a large number of sequence images obtained in the step 1.1 by using live-action modeling software to generate an oblique photography three-dimensional model, and deriving an aerial triangulation result (dense image matching point cloud).
For example, the large number of sequence images obtained in step 1.1 may be imported into ContextCapture software, and three-dimensional live-action modeling is then performed through feature point extraction, multi-view image matching, bundle block adjustment, and similar processes, so as to generate oblique photography three-dimensional models of locations such as Ji'an in Jiangxi, Foshan in Guangdong, and Jinjiang in Fujian, and to derive the aerial triangulation results (image dense matching point clouds).
1.3 ortho image acquisition
And continuously generating an orthoimage without projection distortion through the live-action modeling software, wherein the geographic coordinate system of the orthoimage is consistent with that of the oblique photography three-dimensional model.
For example, a projective distortion-free true ortho image may be generated by live-action modeling software, the geographic coordinate system of which is consistent with the oblique photography three-dimensional model, and which is the WGS-84 coordinate system.
2. Boundary detection deep learning model training
2.1 neural network model construction
Firstly, a deep learning framework is built, and a neural network model for detecting the building boundary image is built.
For example, a deep learning framework for building contour extraction can be constructed first, and a full convolution neural network U-net model is adopted for feature extraction.
2.2 building training data sets
2.2.1 collecting an image-based building outline detection data set disclosed in the industry, and selecting a data set similar to the building style of a research area according to data attributes;
for example, image-based building contour detection data sets published in the industry may be collected, and data sets similar in building style to research areas such as Ji'an in Jiangxi, Foshan in Guangdong, and Jinjiang in Fujian are selected according to data attributes;
2.2.2 selecting a part of the ortho images, and carrying out building outline marking on the selected part of the ortho images by using an image marking tool to obtain marked ortho images;
for example, a part of the orthoimage can be selected, and the building outline can be labeled on the orthoimage by using an image labeling tool labelme;
2.2.3 randomly disordering the data in the step 2.2.1 and the step 2.2.2, fusing and establishing a model training data set, and dividing the model training data set into a training set, a verification set and a test set according to a proper proportion.
For example, the data in step 2.2.1 and step 2.2.2 may be randomly shuffled and fused to establish a model training data set totaling 2,500 samples, which is divided into training, validation, and test sets at a ratio of 8:1:1.
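As an illustrative sketch (function and variable names are ours, not from the patent; the fixed seed is only for reproducibility), the shuffle-and-split of the 2,500-sample data set can be written as:

```python
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Randomly shuffle samples and split them into training/validation/test sets."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    shuffled = samples[:]                      # copy so the input list is untouched
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# 2,500 samples split 8:1:1 gives 2000 / 250 / 250, as in the example above
train, val, test = split_dataset(list(range(2500)))
```

With 2,500 fused samples this reproduces the 2,000/250/250 partition used in the training and testing steps below.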
2.3 model training
Inputting the training set and the verification set obtained in the step 2.2.3 into the building boundary image detection neural network model established in the step 2.1, setting the training environment, the training times, the training threshold and the training step distance of the building boundary image detection neural network model, executing model training, and keeping model parameters after the training is finished so as to obtain an initial boundary detection training model.
For example, the 2,000 training samples and 250 validation samples obtained in step 2.2.3 may be input into the full convolution neural network U-net model constructed in step 2.1; the training environment (Python 3.6, PyTorch 1.4, CUDA 10.0), a learning rate Lr of 0.0001, a batch size of 4, and 100 training epochs are set, and U-net model training is performed. After training is completed, the model parameters are retained to obtain the initial boundary detection training model.
2.4 model testing
Testing the initial boundary detection training model obtained in step 2.3 with the test set obtained in step 2.2.3: if the accuracy of the detection result is greater than or equal to 97%, the practical application requirement is met, model training and testing are completed, and a trained neural network model is obtained; if the accuracy of the detection result is less than 97%, the practical application requirement is not met, and iterative optimization is performed through data set expansion, data enhancement, hyper-parameter adjustment, and similar means until the accuracy of the detection result is greater than or equal to 97% and a well-trained neural network model is obtained.
For example, the initial boundary detection training model obtained in step 2.3 may be tested with the 250 test samples obtained in step 2.2.3; if the accuracy of the detection result is greater than or equal to 97%, the practical application requirement is met and model training and testing are completed; if the accuracy is less than 97%, iterative optimization is performed through data set expansion, data enhancement, network parameter adjustment, and similar means until the requirement is met.
3. Ortho image building contour extraction
3.1 orthographic image segmentation
Performing uniform image segmentation on the orthoimage obtained in step 1.3 (the tile size is determined by the input image size accepted by the neural network model trained in step 2.4) to obtain an orthoimage prediction set.
For example, the real projective image obtained in step 1.3 may be subjected to uniform image segmentation (with a segmentation size of 1024 × 1024) to obtain a prediction set of the real projective image.
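A minimal sketch of the uniform segmentation (names are ours; edge handling is a design choice — here edge tiles are clipped to the image border, though padding them up to the full network input size is equally common):

```python
def tile_windows(width, height, tile=1024):
    """Enumerate (x, y, w, h) crop windows that uniformly tile an image."""
    windows = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            w = min(tile, width - x)   # clip the last column/row of tiles
            h = min(tile, height - y)  # to the image border
            windows.append((x, y, w, h))
    return windows

# a 2500 x 1500 orthoimage yields a 3 x 2 grid of windows
wins = tile_windows(2500, 1500)
```

Each window would then be cropped out of the orthoimage and fed to the detection network of step 2.4.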
3.2 building Block detection
And (3) detecting all the orthoimage prediction sets obtained in the step (3.1) by using the trained neural network model obtained in the step (2.4) to obtain building block binary images of all the orthoimage prediction sets.
For example, all the true ortho image prediction sets in step 3.1 may be detected by using the full convolutional neural network U-net model trained in step 2.4, so as to obtain the building block binary maps of all the ortho image maps. The actual effect of the building block binary map is shown in figure 2 of the accompanying drawings.
3.3 building outline regularization
3.3.1 building Block binary map optimization
(1) Firstly, hole filling is carried out on the building block binary image obtained in the step 3.2 based on an arbitrary polygon seed filling method, so as to obtain a filled building block binary image;
for example, a fillPoly hole filling function may be first constructed based on an arbitrary polygon seed filling method, and hole filling may be performed on the building block binary map obtained in step 3.2;
(2) then, by utilizing a watershed algorithm, building blocks with local connection are segmented in the filled building block binary image to obtain a segmented binary image;
for example, a watershed algorithm can then be used to segment building blocks where local connections exist;
(3) and continuously utilizing a corrosion algorithm to expand gaps among the building blocks in the segmentation binary image so as to obtain an optimized building block binary image.
For example, erosion algorithms may continue to be used to enlarge gaps between building blocks.
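The optimization in 3.3.1 normally relies on OpenCV (fillPoly, watershed, erode). As a library-free illustration of the hole-filling step alone (function names and the sample mask are ours), background pixels connected to the image border can be flood-filled, and every background pixel the fill cannot reach is an interior hole:

```python
from collections import deque

def fill_holes(mask):
    """Fill interior holes in a binary mask (list of lists of 0/1).

    Background reachable from the border stays 0; any 0-pixel not
    reachable from the border is an interior hole and becomes 1.
    """
    h, w = len(mask), len(mask[0])
    reach = [[False] * w for _ in range(h)]
    q = deque()
    for y in range(h):                    # seed the flood fill from all
        for x in range(w):                # border background pixels
            if (y in (0, h - 1) or x in (0, w - 1)) and mask[y][x] == 0:
                reach[y][x] = True
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 0 and not reach[ny][nx]:
                reach[ny][nx] = True
                q.append((ny, nx))
    return [[1 if mask[y][x] == 1 or not reach[y][x] else 0 for x in range(w)]
            for y in range(h)]

# a ring of building pixels with a one-pixel hole in the middle gets filled
m = [[0, 0, 0, 0, 0],
     [0, 1, 1, 1, 0],
     [0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0],
     [0, 0, 0, 0, 0]]
filled = fill_holes(m)
```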
3.3.2 building Profile extraction
(1) Extracting each building contour from the building block binary image optimized in the step 3.3.1 based on a binary image contour extraction algorithm;
for example, a findContours contour extraction function can be constructed based on the principle of a binary image contour extraction algorithm, an area threshold value in the contour extraction function is set to be 500, an aspect ratio threshold value is set to be (0.1, 10), and the building block binary image optimized in the step 3.3.1 is extracted to obtain each building contour;
(2) and performing approximate fitting processing on the extracted boundaries of the building outlines by using a polygon fitting curve method to obtain building outline boundary polygons.
For example, an approxPolyDP contour fitting function can be constructed using the polygon fitting curve method to perform approximate fitting processing on the contour boundaries.
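OpenCV's approxPolyDP implements the Ramer-Douglas-Peucker algorithm; a minimal pure-Python sketch of the same polygon fitting idea (names and the eps value are ours, and the closed-contour case would split the contour first) is:

```python
import math

def rdp(points, eps):
    """Ramer-Douglas-Peucker simplification of an open polyline.

    Keeps the endpoints; recursively keeps any point farther than eps
    from the chord between them, discarding the rest.
    """
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)

    def dist(p):
        # perpendicular distance from p to the chord (or to an endpoint
        # when the chord degenerates to a single point)
        if chord == 0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / chord

    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = dist(points[i])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]
    left = rdp(points[:idx + 1], eps)
    right = rdp(points[idx:], eps)
    return left[:-1] + right   # drop the shared split point once

# nearly collinear middle points are dropped at eps = 0.5
simplified = rdp([(0, 0), (1, 0.1), (2, -0.1), (3, 0)], 0.5)
```

A sharp corner survives the same tolerance: `rdp([(0, 0), (1, 2), (2, 0)], 0.5)` keeps all three points.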
3.3.3 building outline boundary regularization
(1) Acquiring the length and azimuth angle of each edge in the building outline boundary polygon obtained in the step 3.3.2;
for example, the length and azimuth of each side of the polygon of the boundary of the building outline obtained in step 3.3.2 can be obtained;
(2) comparing the length of each side of the building outline boundary polygon, and selecting the longest side as a main direction;
for example, the lengths of each side of the polygon of the building outline boundary may be compared, and the longest side may be selected as the principal direction;
(3) rotating the building outline boundary around a central point to a position perpendicular or parallel to the main direction;
for example, the building outline boundary may be rotated around a central point to a position perpendicular or parallel to the main direction;
(4) correcting adjacent edges of the building contour boundary: when adjacent edges are perpendicular, taking their intersection point; when adjacent edges are parallel, translating the short edge onto the long edge or adding a connecting segment between them according to a distance threshold of the adjacent edges; finally generating the regularized building orthographic projection contour boundary of the orthoimage.
For example, the adjacent edges of the building contour boundary may be corrected in this way, the resulting regularized building contour boundary of the orthoimage being shown in fig. 3 of the drawings;
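The main-direction steps (1)-(3) above can be sketched as follows (an illustration with assumed names; the adjacent-edge correction of step (4) is omitted):

```python
import math

def rotate_to_main_direction(poly):
    """Rotate a closed polygon about its centroid so that its longest
    edge (the main direction) becomes axis-aligned."""
    n = len(poly)
    best, angle = 0.0, 0.0
    for i in range(n):                 # (1)-(2): longest edge and its azimuth
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        length = math.hypot(x2 - x1, y2 - y1)
        if length > best:
            best = length
            angle = math.atan2(y2 - y1, x2 - x1)
    cx = sum(p[0] for p in poly) / n   # centroid as the rotation center
    cy = sum(p[1] for p in poly) / n
    c, s = math.cos(-angle), math.sin(-angle)
    # (3): rigid rotation by -angle about the centroid
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c) for x, y in poly]

# a slightly tilted quadrilateral is rotated so its long edge is horizontal
rect = [(0, 0), (4, 0.4), (3.9, 1.4), (-0.1, 1.0)]
reg = rotate_to_main_direction(rect)
```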
4. building outline geographic coordinate transformation
4.1 extracting an affine matrix of the orthoimage (tiff file) based on the GDAL raster spatial data conversion library, the affine matrix comprising the geographic coordinates (X, Y) of the upper-left corner point of the image and a signed scale factor α between pixel coordinates and actual geographic coordinates;
4.2 constructing the conversion function between the corner pixel coordinates (x, y) of the regularized building orthographic projection contour boundary of any one of the orthoimages and the corresponding geographic coordinates (X_COOR, Y_COOR):
X_COOR = X + x·α;
Y_COOR = Y + y·α;
and 4.3, converting the pixel coordinates of each corner point of the regular building orthographic projection outline boundary of each orthographic image into corresponding geographic coordinates according to a formula in the step 4.2 so as to obtain the geographic coordinate values of each corner point of the building outline.
5. Real scene model building outline vector diagram generation
And (4) generating a real scene model building boundary plane vector diagram (a building contour two-dimensional plane vector diagram) according to the geographical coordinate value of each corner point of the building contour obtained in the step (4), wherein the real scene model building boundary plane vector diagram and the oblique photography three-dimensional model obtained in the step (1.2) are in the same geographical coordinate system.
6. Building single bounding box for creating live-action model
6.1 building cell bounding Box height determination
6.1.1 obtaining the three-dimensional geographic coordinate information of the aerial triangulation connection points (aerotriangulation tie points) from the aerial triangulation result of step 1.2;
6.1.2 according to the geographical coordinate value of each corner point of the building outline obtained in the step 4 and the three-dimensional geographical coordinate of the aerial triangle connecting point obtained in the step 1.2, judging the inclusion relationship between the aerial triangle connecting point and the building outline based on a ray method principle, and screening out the aerial triangle connecting point positioned in each building outline;
6.1.3 comparing the elevations of the aerial triangular connecting points in each building outline to obtain the lowest elevation and the highest elevation in each building outline;
6.1.4 then subtracting the lowest elevation from the highest elevation in each building outline to obtain the height of the bounding box of the building unit;
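Steps 6.1.1-6.1.4 combine a ray-casting point-in-polygon test with an elevation range over the tie points inside each outline; a minimal sketch (assumed names and sample data):

```python
def point_in_polygon(px, py, poly):
    """Ray-casting test: cast a horizontal ray from (px, py) and count
    polygon edge crossings; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):                    # edge straddles the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:                          # crossing to the right
                inside = not inside
    return inside

def bounding_box_height(outline, tie_points):
    """Bounding-box height of a building unit: highest minus lowest
    elevation over the tie points falling inside the outline.
    Returns (height, lowest_elevation)."""
    zs = [z for x, y, z in tie_points if point_in_polygon(x, y, outline)]
    return max(zs) - min(zs), min(zs)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
pts = [(5, 5, 3.0), (2, 8, 21.5), (15, 5, 99.0)]   # the last point lies outside
height, base = bounding_box_height(square, pts)
```

The lowest elevation `base` is also what step 7 below uses as the vertical placement of the bounding box.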
6.2 creating building monomer enclosures
Taking the vector geometric polygon in the real-scene model building boundary plane vector diagram obtained in the step 5 as the lower bottom surface of the bounding box model of the building single body, and taking the height of the bounding box of the building single body obtained in the step 6.1.4 as the height of the bounding box model, and creating a building single body bounding box polyhedral model;
for example, according to the two-dimensional plane vector diagram of the real-scene model building outline obtained in step 5, the vector geometric polygon in the diagram is used as the lower base of the bounding box model; the base polygon is extruded upward by the building-unit bounding box height determined in step 6.1, and the adjacent vertices of the base polygon and their extruded counterparts are then connected in sequence to form the side faces of the bounding box model, yielding a building-unit bounding box polyhedral model extruded from the two-dimensional plane vector geometric polygon to the building height.
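A minimal sketch of the extrusion just described (names are ours; faces are kept as vertex lists rather than triangulated, whereas a renderable model would split each face into triangles):

```python
def extrude_polygon(base, z_min, height):
    """Extrude a 2-D base polygon into a closed prism (the building-unit
    bounding box): bottom face at z_min, top face at z_min + height, and
    one quad side face per base edge."""
    n = len(base)
    bottom = [(x, y, z_min) for x, y in base]
    top = [(x, y, z_min + height) for x, y in base]
    sides = []
    for i in range(n):
        j = (i + 1) % n
        # quad connecting base edge i-j with its extruded counterpart
        sides.append([bottom[i], bottom[j], top[j], top[i]])
    return {"bottom": bottom, "top": top, "sides": sides}

# a 6 x 4 footprint at base elevation 12.0 extruded 20.0 upward
prism = extrude_polygon([(0, 0), (6, 0), (6, 4), (0, 4)], z_min=12.0, height=20.0)
```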
7. Dynamic singulation display
Superimposing the building-unit bounding box model obtained in step 6.2 at the lowest-elevation position within the corresponding building outline of the oblique photography three-dimensional model obtained in step 1.2 to obtain a composite model, and rendering the composite model with a specified overlay color, thereby achieving highlighted display of individual buildings and realizing dynamic singulation of the oblique photography real-scene model.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for unitizing an oblique photography model based on orthoimage boundary detection is characterized by comprising the following steps:
step S1, acquiring images from multiple angles through an aircraft platform carrying a multi-lens sensor, and acquiring an oblique photography three-dimensional model and an orthoimage without projection distortion;
step S2, constructing a neural network model, training the neural network model, and carrying out boundary detection on the orthophoto image by using the trained neural network model to obtain a building orthophoto projection contour boundary;
step S3, regularizing the building orthographic projection contour boundary to obtain a building orthographic projection contour boundary after the orthographic image regularization, performing real-scene model coordinate transformation on the building orthographic projection contour boundary after the orthographic image regularization to obtain a geographical coordinate value of each corner point of the building outline, and generating a real-scene model building boundary plane vector diagram based on the geographical coordinate value of each corner point of the building outline;
and step S4, building a building bounding box model based on the real scene model building boundary plane vector diagram, superposing the bounding box model on the oblique photography three-dimensional model to obtain a superposed three-dimensional model, rendering triangular surface sheets in the superposed three-dimensional model and superposing specified colors, thereby realizing automatic singleization of the oblique photography real scene model.
2. The oblique photography model unitization method based on orthoscopic image boundary detection as claimed in claim 1, wherein the step S1 of acquiring the oblique photography three-dimensional model and the orthoscopic image without projection distortion by capturing the image from multiple angles through an aircraft platform carrying multiple lens sensors comprises:
step S11, carrying out multi-azimuth and multi-angle aerial photography on a target area from the air by using an aircraft platform carrying a multi-lens sensor to obtain a sequence image with a preset overlapping degree;
step S12, reconstructing and generating an oblique photography three-dimensional model according to the sequence images by using live-action modeling software, and deriving aerial image dense matching point clouds;
and step S13, generating an orthoimage without projection distortion according to the sequence image by using the live-action modeling software, wherein the geographic coordinate system of the orthoimage is consistent with the geographic coordinate system of the oblique photography three-dimensional model.
3. The method of claim 1, wherein the step of constructing and training the neural network model in step S2 comprises:
step S21, building a deep learning framework and building a neural network model for detecting the building boundary image;
step S22, establishing a training set, a verification set and a test set;
step S23, inputting a training set and a verification set into the neural network model for building boundary image detection, setting the training environment, the training times, the training threshold value and the training step pitch of the neural network model for building boundary image detection, executing model training, and reserving model parameters after the training is finished to obtain an initial boundary detection training model;
step S24, testing the initial boundary detection training model with the test set; if the accuracy of the test result is greater than or equal to 97%, completing model training and testing and taking the initial boundary detection training model as the trained neural network model; if the accuracy of the detection result is less than 97%, performing iterative optimization on the parameters of the initial boundary detection training model through data set expansion, data enhancement, and hyper-parameter adjustment until the accuracy of the detection result is greater than or equal to 97%, thereby obtaining the trained neural network model.
4. The method of claim 3, wherein the step S22 of building a training set and a verification set comprises:
step S221, selecting a data set similar to the building style of a research area;
step S222, selecting a part of the ortho-images, and carrying out building outline marking on the selected part of the ortho-images by using an image marking tool to obtain marked ortho-images;
and step S223, randomly disordering the data sets similar to the architectural style of the research area and the marked orthographic images to fuse and establish a model training data set, and dividing the model training data set into a training set, a verification set and a test set according to a preset proportion.
5. The method of claim 4, wherein the step S2 of using the trained neural network model to perform boundary detection on the orthophoto image to obtain the boundary of the architectural orthophoto projection contour comprises:
step S25, determining a tile size according to the input image size accepted by the trained neural network model, and performing uniform image segmentation on the orthoimage based on the tile size to obtain an orthoimage prediction set;
and step S26, detecting all the orthoimage prediction sets by using the trained neural network model to obtain building block binary images of all the orthoimage prediction sets.
6. The oblique photography model unitization method based on orthoimage boundary detection as claimed in claim 1, wherein the step S3 of regularizing the building orthographic projection contour boundary to obtain an orthoimage-regularized building orthographic projection contour boundary comprises:
step S301, based on an arbitrary polygon seed filling method, hole filling is carried out on a building block binary image of the orthophoto image prediction image set to obtain a filled building block binary image;
step S302, building blocks with local connection are segmented in the filled building block binary image by using a watershed algorithm to obtain a segmented binary image;
step S303, utilizing a corrosion algorithm to expand gaps among the building blocks in the divided binary image so as to obtain an optimized building block binary image;
step S304, extracting each building contour from the optimized building block binary image based on a binary image contour extraction algorithm;
step S305, performing approximate fitting processing on each extracted building outline boundary by using a polygon fitting curve method to obtain a building outline boundary polygon;
step S306, acquiring the length and azimuth angle of each edge in the building outline boundary polygon;
step S307, comparing the length of each side of the building outline boundary polygon, and selecting the longest side as a main direction;
step S308, rotating the building outline boundary around a central point to a position perpendicular or parallel to the main direction to obtain the rotated building outline boundary;
step S309, correcting adjacent edges of the rotated building outline boundary: when adjacent edges are perpendicular, taking their intersection point; when adjacent edges are parallel, translating the short edge onto the long edge or adding a connecting segment between them according to a distance threshold of the adjacent edges; finally generating the regularized building orthographic projection contour boundary of the orthoimage.
7. The method of claim 6, wherein the step S3 of transforming real-world model coordinates of the boundary of the regular orthophoto projection profile of the building to obtain the geographic coordinate value of each corner point of the building profile comprises:
step S321, extracting an affine matrix of the orthoimage based on the GDAL raster spatial data conversion library, wherein the affine matrix comprises the geographic coordinates (X, Y) of the upper-left corner point of the image and a scale factor α between pixel coordinates and actual geographic coordinates;
step S322, constructing the conversion function between the corner pixel coordinates (x, y) of the regularized building orthographic projection contour boundary of any one of the orthoimages and the corresponding geographic coordinates (X_COOR, Y_COOR):
X_COOR = X + x·α;
Y_COOR = Y + y·α;
step S323, converting the pixel coordinates of each corner point of the regularized building orthographic projection contour boundary of each orthoimage into the corresponding geographic coordinates according to the conversion function.
8. The method of claim 7, wherein the step S3 of generating the real-world model building boundary plane vector diagram based on the geographical coordinate value of each corner point of the building outline comprises:
and step S331, generating a real-scene model building boundary plane vector diagram based on the geographic coordinate value of each corner point of the building outline, wherein the real-scene model building boundary plane vector diagram and the oblique photography three-dimensional model are in the same geographic coordinate system.
9. The method of claim 1, wherein the step S4 of building the building bounding box model based on the real world model building boundary plane vector diagram comprises:
step S401, obtaining three-dimensional geographic coordinate information of an aerial triangular connection point according to the aerial triangulation result;
s402, judging the inclusion relation between the aerial triangle connection points and the building outline based on the ray method principle according to the geographic coordinate value of each corner point of the building outline and the three-dimensional geographic coordinate of the aerial triangle connection points, and screening out the aerial triangle connection points in each building outline;
step S403, comparing elevations of the aerial triangular connecting points in each building outline to obtain the lowest elevation and the highest elevation in each building outline;
step S404, subtracting the lowest elevation from the highest elevation in each building outline to obtain the height of the bounding box of the building unit;
step S405, using the vector geometric polygon in the real scene model building boundary plane vector diagram as the lower bottom surface of the bounding box model of the building single body, and using the height of the bounding box of the building single body as the height of the bounding box model, and creating the building single body bounding box polyhedral model.
10. The method of claim 9, wherein superimposing the bounding box model on the oblique photography three-dimensional model to obtain a superimposed three-dimensional model, rendering triangular patches in the superimposed three-dimensional model and superimposing specified colors, thereby achieving automatic singulation of the oblique photography real scene model, comprises:
and superposing the building monomer bounding box model to the lowest elevation position in the building outline of the oblique photography three-dimensional model to obtain a composite model, and rendering and superposing the composite model with a specified color, so that the highlight display of the model singleness is realized, and the dynamic singleness of the oblique photography real scene model is further realized.
CN202111373225.8A 2021-11-19 2021-11-19 Oblique photography model unitization method based on orthoscopic image boundary detection Pending CN114219819A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111373225.8A CN114219819A (en) 2021-11-19 2021-11-19 Oblique photography model unitization method based on orthoscopic image boundary detection

Publications (1)

Publication Number Publication Date
CN114219819A true CN114219819A (en) 2022-03-22

Family

ID=80697550

Country Status (1)

Country Link
CN (1) CN114219819A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114417489A (en) * 2022-03-30 2022-04-29 宝略科技(浙江)有限公司 Building base contour refinement extraction method based on real-scene three-dimensional model
CN114429530A (en) * 2022-04-06 2022-05-03 武汉峰岭科技有限公司 Method, system, storage medium and device for automatically extracting three-dimensional model of building
CN114429530B (en) * 2022-04-06 2022-06-24 武汉峰岭科技有限公司 Method, system, storage medium and device for automatically extracting three-dimensional model of building
CN115063551A (en) * 2022-08-18 2022-09-16 北京山维科技股份有限公司 Method and device for generating slice orthoimage based on oblique photography three-dimensional model
CN115063551B (en) * 2022-08-18 2022-11-22 北京山维科技股份有限公司 Method and device for generating slice orthoimage based on oblique photography three-dimensional model
CN115994987A (en) * 2023-03-21 2023-04-21 天津市勘察设计院集团有限公司 Rural building extraction and vectorization method based on inclined three-dimensional model
CN116597150A (en) * 2023-07-14 2023-08-15 北京科技大学 Deep learning-based oblique photography model full-element singulation method and device
CN116597150B (en) * 2023-07-14 2023-09-22 北京科技大学 Deep learning-based oblique photography model full-element singulation method and device
CN116664581A (en) * 2023-08-02 2023-08-29 山东翰林科技有限公司 Oblique photography model quality verification and optimization method
CN116664581B (en) * 2023-08-02 2023-11-10 山东翰林科技有限公司 Oblique photography model quality verification and optimization method
CN117173341A (en) * 2023-10-15 2023-12-05 广东优创合影文化传播股份有限公司 3D modeling projection method and system based on digitization
CN117173341B (en) * 2023-10-15 2024-07-05 广东优创合影文化传播股份有限公司 3D modeling projection method and system based on digitization
CN117454495A (en) * 2023-12-25 2024-01-26 北京飞渡科技股份有限公司 CAD vector model generation method and device based on building sketch outline sequence
CN117454495B (en) * 2023-12-25 2024-03-15 北京飞渡科技股份有限公司 CAD vector model generation method and device based on building sketch outline sequence

Similar Documents

Publication Publication Date Title
CN114219819A (en) Oblique photography model unitization method based on orthoscopic image boundary detection
CN111209915B (en) Three-dimensional image synchronous recognition and segmentation method based on deep learning
US7133551B2 (en) Semi-automatic reconstruction method of 3-D building models using building outline segments
CN102506824B (en) Method for generating digital orthophoto map (DOM) by urban low altitude unmanned aerial vehicle
CN113192193B (en) High-voltage transmission line corridor three-dimensional reconstruction method based on Cesium three-dimensional earth frame
CN110866531A (en) Building feature extraction method and system based on three-dimensional modeling and storage medium
WO2018061010A1 (en) Point cloud transforming in large-scale urban modelling
CN114612488A (en) Building-integrated information extraction method, computer device, and storage medium
CN113192200B (en) Method for constructing urban real scene three-dimensional model based on space-three parallel computing algorithm
CN105046251A (en) Automatic ortho-rectification method based on remote-sensing image of environmental No.1 satellite
CN111383335B (en) Crowd funding photo and two-dimensional map combined building three-dimensional modeling method
JP2010525491A (en) Geospatial modeling system and associated method for providing data decimation of geospatial data
WO2022104260A1 (en) Data normalization of aerial images
CN109727255B (en) Building three-dimensional model segmentation method
RU2612571C1 (en) Method and system for recognizing urban facilities
KR101079475B1 (en) A system for generating 3-dimensional urban spatial information using point cloud filtering
Zhao et al. Completing point clouds using structural constraints for large-scale points absence in 3D building reconstruction
KR101079531B1 (en) A system for generating road layer using point cloud data
Ni et al. Applications of 3d-edge detection for als point cloud
CN116543116A (en) Method, system, equipment and terminal for three-dimensional virtual visual modeling of outcrop in field
CN110163962A (en) A method of based on Smart 3D oblique photograph technology export actual landform contour
CN113192204B (en) Three-dimensional reconstruction method for building in single inclined remote sensing image
CN115471619A (en) City three-dimensional model construction method based on stereo imaging high-resolution satellite image
Kim et al. Automatic generation of digital building models for complex structures from LiDAR data
Choi et al. Automatic Construction of Road Lane Markings Using Mobile Mapping System Data.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination