CN109992809B - Building model construction method and device and storage device - Google Patents


Info

Publication number
CN109992809B
CN109992809B (grant of application CN201711498685.7A)
Authority
CN
China
Prior art keywords
dimensional
model
target
building
preliminary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711498685.7A
Other languages
Chinese (zh)
Other versions
CN109992809A (en
Inventor
熊友军
潘慈辉
谭圣琦
王先基
庞建新
Current Assignee
Beijing Youbixuan Intelligent Robot Co ltd
Shenzhen Ubtech Technology Co ltd
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201711498685.7A priority Critical patent/CN109992809B/en
Publication of CN109992809A publication Critical patent/CN109992809A/en
Application granted granted Critical
Publication of CN109992809B publication Critical patent/CN109992809B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Civil Engineering (AREA)
  • Architecture (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a building model construction method comprising the following steps: identifying a target building in a plurality of pictures shot from multiple angles; calculating the pose and point cloud of the target building in each picture; combining the poses and point clouds corresponding to the multiple pictures to obtain a preliminary three-dimensional model; performing two-dimensional segmentation on the preliminary three-dimensional model to obtain a preset number of two-dimensional slices; and aligning the two-dimensional slices in a preset manner to obtain a target three-dimensional model. In this technical scheme, the pose and point cloud of the target building are calculated from the multiple multi-angle pictures and combined into a preliminary three-dimensional model; that model is then segmented into two-dimensional slices, and the slices are aligned to obtain the target three-dimensional model, finally achieving a more accurate reconstruction. The application also provides a device for reconstructing the model and a device with a storage function.

Description

Building model construction method and device and storage device
Technical Field
The present application relates to the field of reconstruction models, and in particular, to a building model construction method, device, and storage device.
Background
Prior-art vision-based three-dimensional model building methods generate abnormal points (outliers) from various sources of noise and error, and these outliers cause pits and depressions on what should be a smooth reconstructed surface. Moreover, because a traditional reconstructed model contains a huge number of point-cloud points and mesh elements, transmission, analysis, and further use of the three-dimensional model are limited.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a building model construction method, a corresponding device, and a device with a storage function, which address the limitation on applications of a reconstructed model caused by abnormal points introduced by external factors during model reconstruction.
In order to solve the technical problem, the technical scheme adopted by the application is as follows: a construction method of a building model is provided, which comprises the following steps:
identifying target buildings in a plurality of pictures shot at multiple angles;
calculating the pose and the point cloud of the target building in each picture;
performing combined processing according to the poses and point clouds of the target building corresponding to the multiple pictures to obtain a preliminary three-dimensional model;
performing two-dimensional segmentation processing on the preliminary three-dimensional model to obtain a preset number of two-dimensional slices;
and aligning the two-dimensional slices according to a preset mode to obtain a target three-dimensional model.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an apparatus for reconstructing a model. The apparatus includes a processor and a memory electrically connected to each other; the processor, coupled to the memory, executes instructions that, when run, implement the building model construction method above, and stores the processing results generated by executing the instructions in the memory.
In order to solve the technical problem, the other technical scheme adopted by the application is as follows: there is provided an apparatus having a storage function, storing program data which, when executed, implements the method described above.
The beneficial effects of the above technical scheme are as follows. Unlike the prior art, the method identifies the target building in multiple pictures shot from multiple angles, calculates the pose and point cloud of the target building in each picture, and performs a preset combination according to the calculated poses and point clouds to obtain a preliminary three-dimensional model. The preliminary model is then segmented into a preset number of two-dimensional slices, and the slices are aligned in a preset manner to obtain the target three-dimensional model, finally achieving the reconstruction of a more accurate and smoother model.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating one embodiment of a method for building a model of an object;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a method for building a model of an object;
FIG. 3a is a schematic diagram of the effect of a prior art construction model construction method;
FIG. 3b is a schematic diagram illustrating the effect of another embodiment of the construction method of the present application;
FIG. 4 is a schematic diagram of an embodiment of an apparatus for building a model of an object according to the present application;
fig. 5 is a schematic structural diagram of an embodiment of the apparatus with a storage function according to the present application.
Detailed Description
Hereinafter, exemplary embodiments of the present application are described with reference to the accompanying drawings. Well-known functions and constructions are not described in detail, as doing so would obscure the application with unnecessary detail. Terms used below are defined in light of their function in the present application and may vary with the intentions or practices of users and operators; they should therefore be interpreted based on the disclosure of the entire specification.
Please refer to fig. 1, which is a flowchart illustrating an embodiment of a method for constructing a building model according to the present application. It should be noted that, if the results are substantially the same, the method of the present application is not limited to the flow sequence shown in fig. 1, and the other flow diagrams described below are also not limited to the flow sequence shown in the figures. As shown in fig. 1, the method includes steps S10 to S50, in which:
s10: and identifying target buildings in the pictures shot from multiple angles.
Step S10 identifies the building to be modeled, referred to as the target building, in the acquired and/or received pictures. The multiple pictures may be obtained in real time by a shooting device, transmitted over a network, or stored locally; each picture contains the building to be modeled, and the pictures are taken from different angles of the target building.
In one embodiment, the identified picture is a call to a locally stored picture, wherein the identified picture is taken from at least 3 directions of the target building (any three directions of the front, back, left, right, and above the building).
In another embodiment, the identified pictures are taken from more than 5 directions of the target building. Specifically, the shooting directions include the five directions front, back, left, right, and above the building, as well as directions deviating from these by a certain angle, such as the direction along a body diagonal of the building. Since the building can be shot from many directions, the shooting angle is not limited.
Further, in one embodiment, pixels associated with the target building in each of the identified pictures are labeled for subsequent recall or further processing of the pictures based on the pixels corresponding to the labeled target building.
Further, in an embodiment, after the step S10, the method further includes: and replacing part of pixels except the target building in each picture with pure white.
In one embodiment, the currently identified picture includes other buildings, background, and so on in addition to the target building. After step S10, all pixels outside the target building in each picture are replaced with pure white. This reduces the amount of computation and recognition, simplifies the reconstruction, improves its accuracy, and further reduces both the size of the reconstructed model file and the computer memory it occupies.
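The pixel-replacement step described above can be sketched as follows — a minimal illustration using NumPy, assuming a binary building mask is already available from the identification step (the function name and the source of the mask are hypothetical, not from the patent):

```python
import numpy as np

def mask_to_white(image, building_mask):
    """Replace every pixel outside the target-building mask with pure white.

    image: (H, W, 3) uint8 array; building_mask: (H, W) bool array,
    True where the pixel belongs to the target building.
    """
    out = image.copy()
    out[~building_mask] = 255  # pure white background
    return out

# Tiny 2x2 example: only the top-left pixel belongs to the building.
img = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
white = mask_to_white(img, mask)
```

Masking to a constant color before reconstruction also means the background contributes no feature points, which is one way the computation is reduced.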
S20: calculate each sheet object in picture pose of the building and point cloud.
Further, after the target buildings in the pictures are identified in step S10, the pose and point cloud of the target building in each picture are calculated. The pose of the target building refers to its position and posture in a coordinate system, which can also be simply understood as the shooting direction of the current picture. The point cloud of the target building can be understood as a set of data points sampled from its surface.
Specifically, step S20 further includes calculating the pose and point cloud of the target building using a motion recovery structure (structure-from-motion) algorithm. Because different pictures are shot from different angles, the pose of the target building differs between pictures. The motion recovery structure algorithm automatically recovers camera motion or scene structure from two or more views; the technique is self-calibrating and automatically completes camera tracking and motion matching, specifically by matching feature points in two or more adjacent pictures to recover the complete scene structure.
In an embodiment, according to the target buildings identified in the multiple multi-angle pictures in step S10, feature points of pictures at adjacent poses are matched using a motion recovery structure algorithm to recover the same target building shot from multiple angles, thereby obtaining the point cloud corresponding to the target building.
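The feature-point matching that structure-from-motion relies on can be illustrated with a minimal ratio-test matcher over descriptor vectors — a common ingredient of SfM pipelines, shown here as a sketch under the assumption of plain Euclidean descriptors (this is not the patent's specific matching method):

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.75):
    """Lowe-style ratio-test matching between two descriptor sets.

    desc_a: (Na, D), desc_b: (Nb, D). Returns (i, j) pairs where
    descriptor i in image A confidently matches descriptor j in image B.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match clearly beats the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# A's first descriptor has a near-duplicate in B; A's second is ambiguous.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[1.01, 0.0], [0.9, 0.1]])
pairs = match_features(a, b)
```

The ratio test discards ambiguous correspondences, which is what keeps the recovered poses and point cloud from being corrupted by false matches.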
S30: and performing combined processing according to the poses of the target building and the point cloud corresponding to the multiple pictures to obtain a preliminary three-dimensional model.
And performing combination processing according to the pose and the point cloud of the target building in the multiple multi-angle pictures obtained by calculation in the step S20 to obtain a preliminary three-dimensional model.
Further, step S30 includes: and calculating by using a multi-view stereoscopic vision algorithm and performing combined processing to obtain a preliminary three-dimensional model according to the poses and point clouds of the target buildings corresponding to the multiple pictures.
According to the poses and point clouds of the target building in the multiple pictures calculated in step S20, the point cloud is further processed using a multi-view stereo vision algorithm and combined to obtain a preliminary three-dimensional model.
As the characteristics of the motion recovery structure algorithm imply, the point cloud of the target building that it produces is sparse; similarly, by the characteristics of the multi-view stereo vision algorithm, the point cloud it produces is dense. It can be understood that when the multi-view stereo vision algorithm starts from the point cloud computed by the motion recovery structure algorithm and refines it using the calculated pose relationships, the resulting preliminary three-dimensional model is a dense point cloud.
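The combination of per-picture poses and point clouds into one preliminary model can be sketched as a rigid transform of each view's points into a shared world frame — a simplified stand-in for the multi-view stereo step; the pose convention (R, t mapping camera coordinates into world coordinates) is an assumption:

```python
import numpy as np

def merge_point_clouds(views):
    """Transform each view's local point cloud into the world frame and merge.

    views: list of (R, t, points) with R a (3, 3) rotation and t a (3,)
    translation mapping local camera coordinates into the world frame,
    and points an (N, 3) array in camera coordinates.
    """
    world = [pts @ R.T + t for R, t, pts in views]
    return np.vstack(world)

# One view already in the world frame, one whose camera sits at x = 1.
I = np.eye(3)
views = [(I, np.zeros(3), np.array([[0.0, 0.0, 1.0]])),
         (I, np.array([1.0, 0.0, 0.0]), np.array([[0.0, 0.0, 1.0]]))]
cloud = merge_point_clouds(views)
```

A real MVS stage would additionally densify the cloud by matching pixels across views; this sketch only shows the pose-based merging.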
S40: and carrying out two-dimensional segmentation treatment on the preliminary three-dimensional model to obtain a preset number of two-dimensional slices.
Further, the obtained preliminary three-dimensional model is segmented in a preset direction to obtain a preset number of two-dimensional slices. The segmentation direction may be horizontal, vertical, or another direction obtained by specific calculation.
Optionally, in the step of performing two-dimensional segmentation on the preliminary three-dimensional model, the number of target two-dimensional slices is set according to the required segmentation precision. In an embodiment, when very precise, fine segmentation is required, a larger number of cuts is set to obtain more two-dimensional slices; in another embodiment, when ordinary segmentation suffices, a smaller number of cuts is set to obtain fewer two-dimensional slices.
Further, in an embodiment, step S40 includes sequentially segmenting the preliminary three-dimensional model in a preset direction to obtain a preset number of two-dimensional slices, and numbering the slices in cutting order as they are produced. For example, if slicing proceeds from top to bottom along the horizontal direction, the resulting two-dimensional slices are labeled sequentially in that order.
Specifically, the numbered content may be "1,2,3 … …," and it is understood that in other embodiments, the numbered content may be set according to the user's needs, for example, it may be set in a manner of combining letters and numbers.
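The slicing-and-numbering described above can be sketched as binning the model's points into slabs along a chosen axis; a minimal NumPy illustration (the axis choice and uniform bin edges are assumptions — the patent leaves the preset direction and spacing open):

```python
import numpy as np

def slice_model(points, num_slices, axis=2):
    """Cut a point cloud into `num_slices` slabs along one axis and
    number them in cutting order (0, 1, 2, ...)."""
    coords = points[:, axis]
    lo, hi = coords.min(), coords.max()
    edges = np.linspace(lo, hi, num_slices + 1)
    slices = []
    for k in range(num_slices):
        # Last slab is closed on top so the highest point is not dropped.
        top = edges[k + 1] if k < num_slices - 1 else hi + 1e-9
        in_slab = (coords >= edges[k]) & (coords < top)
        slices.append((k, points[in_slab]))  # (slice number, member points)
    return slices

pts = np.array([[0, 0, 0.0], [0, 0, 0.4], [0, 0, 1.0]])
parts = slice_model(pts, 2)
```

Numbering the slabs in cutting order is what later lets the alignment step reassemble them in the original sequence.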
In another embodiment, referring to fig. 2, step S40 includes steps S41 to S43.
s41: and detecting straight lines and planes of the preliminary three-dimensional model.
Step S41 further examines the preliminary three-dimensional model according to the calculated point cloud and extracts the straight lines and planes in it. Extracting straight lines and planes mainly serves to remove artifacts caused by noise or other factors, making the pose estimation and point cloud in the reconstruction more accurate. At the same time, without reducing model quality, it effectively reduces the complexity of the target building's point cloud and of the mesh structure of the target three-dimensional model obtained by subsequent conversion.
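Detecting a dominant plane in a noisy point cloud is commonly done with RANSAC; the sketch below illustrates that idea under my own parameter choices (RANSAC itself is an assumption — the patent does not name its detection algorithm):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """RANSAC sketch for the dominant plane in a noisy cloud.

    Returns (normal, d) of the plane n.x = d with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = n @ a
        inliers = np.sum(np.abs(points @ n - d) < tol)
        if inliers > best_inliers:
            best, best_inliers = (n, d), inliers
    return best

# Four points on the z = 0 plane plus one outlier far above it.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                [0.5, 0.5, 3.0]])
n, d = ransac_plane(pts)
```

Because the outlier can never attract more inliers than the true plane, the detected plane ignores it — which is exactly the noise-removal effect the step is for.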
S42: and fitting a plane in the first direction and/or a plane in the second direction according to the straight line and/or the plane.
Step S42 fits a plane in the first direction and/or a plane in the second direction from the straight lines and/or planes obtained in step S41. Fitting a straight line means computing, from a preset number of points, the line that minimizes the sum of squared distances to those points; similarly, fitting a plane means computing the plane that minimizes the sum of squared distances to a preset number of straight lines. The distances here are perpendicular distances.
Further, points in the point cloud are firstly used for fitting straight lines of a preset number, and a plane in the first direction or a plane in the second direction is further fitted by using the straight lines obtained through fitting. The plane in the first direction is a plane in the horizontal direction, and the plane in the second direction is a plane in the vertical direction, and the fitting is performed as required.
In an embodiment, the plane in the first direction to be fitted is a plane in the horizontal direction, a straight line parallel to the plane in the horizontal direction is fitted first, and the plane in the horizontal direction is further fitted according to the fitted straight line. Similarly, in other embodiments, when it is necessary to fit a plane in a certain direction, a straight line parallel to the desired plane is fitted first, and then the desired plane is fitted.
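The least-squares criterion described above (minimizing the sum of squared perpendicular distances) has a standard closed form via the singular value decomposition; a minimal sketch, fitting a plane directly to points rather than through intermediate fitted lines as in the text:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: minimizes the sum of
    squared perpendicular distances. Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The singular vector with the smallest singular value of the
    # centered cloud is the direction of least spread: the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Roughly horizontal points: the fitted normal should be near the z axis.
pts = np.array([[0, 0, 0.01], [1, 0, -0.01], [0, 1, 0.0], [1, 1, 0.02]])
c, n = fit_plane(pts)
```

The same decomposition fits a line if the largest-singular-value vector is taken instead, so one routine serves both the line and plane fits mentioned above.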
S43: and performing two-dimensional segmentation on the preliminary three-dimensional model according to the plane in the first direction and/or the plane in the second direction to obtain a preset number of two-dimensional slices.
Step S43 is to perform two-dimensional segmentation on the preliminary three-dimensional model according to the plane in the first direction and/or the plane in the second direction obtained by fitting in step S42, so as to obtain a preset number of two-dimensional slices.
In one embodiment, when the fitting in step S42 results in a plane in the first direction, i.e. a plane in the horizontal direction, step S43 cuts the preliminary three-dimensional model according to the resulting plane in the horizontal direction to obtain a predetermined number of two-dimensional slices.
In another embodiment, when the plane in the second direction, i.e. the plane in the vertical direction, is obtained by the fitting in step S42, step S43 cuts the preliminary three-dimensional model according to the obtained plane in the vertical direction to obtain a predetermined number of two-dimensional slices. It can be understood that, when the preliminary three-dimensional model is sliced along a plane in the vertical direction, and the two-dimensional slices obtained by slicing are subsequently aligned, the alignment process is performed along the original slicing direction.
After the preliminary three-dimensional model is segmented, each two-dimensional slice is normalized to obtain a calibrated two-dimensional slice, which is typically a two-dimensional picture of some polygons. Normalization further processes the slices obtained above to remove outliers and stray points and to fill in gaps caused by other factors (including noise), yielding a more accurate polygon (the calibrated two-dimensional slice). Specifically, fig. 3a and fig. 3b illustrate the effect of normalization. Fig. 3a shows a two-dimensional slice before normalization: because of noise and other variable factors, the slice has defects and its edge portions are uneven. After normalization, as shown in fig. 3b, the slice becomes a polygon with straighter edges and more accurate corner angles, and the missing parts are filled in. In summary, normalization models the two-dimensional slice as a polygon through the extracted straight-line portions; this involves removing stray points while fitting the straight lines, filling in missing straight-line segments of the polygonal structure, and adjusting corner angles to be more accurate.
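One way to realize this normalization — straightening the jagged edges of a sliced polygon — is to snap nearly identical coordinates together; this is my illustrative simplification (the clustering tolerance and the axis-aligned-edge assumption are mine, not the patent's):

```python
import numpy as np

def normalize_slice(polygon, tol=0.05):
    """Normalization sketch: merge x (and y) coordinates that differ by
    less than `tol`, so noisy near-vertical/near-horizontal edges become
    exactly straight axis-aligned segments.

    polygon: (N, 2) array of vertices. Returns the straightened copy.
    """
    out = polygon.astype(float).copy()
    for axis in range(2):
        vals = np.unique(out[:, axis])  # sorted distinct coordinates
        groups, current = [], [vals[0]]
        for v in vals[1:]:
            if v - current[-1] < tol:
                current.append(v)       # same cluster: noise-level gap
            else:
                groups.append(current)
                current = [v]
        groups.append(current)
        for g in groups:                # snap each cluster to its mean
            m = np.mean(g)
            for v in g:
                out[out[:, axis] == v, axis] = m
    return out

# A noisy rectangle: 0.00/0.02 merge on the left, 1.00/1.02 on the right.
poly = np.array([[0.0, 0.0], [1.0, 0.01], [1.02, 1.0], [0.02, 1.0]])
clean = normalize_slice(poly)
```

After snapping, opposite corners share exact coordinates, so the polygon's edges are truly straight and its angles exactly 90 degrees — the effect fig. 3b depicts.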
S50: and aligning the two-dimensional slices according to a preset mode to obtain a target three-dimensional model.
Further, in step S50, the two-dimensional slices are aligned in a growing alignment manner to obtain a target three-dimensional model.
In an embodiment, in step S50, the two-dimensional slice obtained after the segmentation is directly aligned in a growing alignment manner to obtain the target three-dimensional model.
In another embodiment, step S50 specifically aligns the normalized, calibrated two-dimensional slices in a growing manner to obtain the target three-dimensional model; a calibrated two-dimensional slice is typically a two-dimensional picture of some polygons. Further, in step S50, the two-dimensional slices are aligned in a preset manner and rounded during the alignment process to obtain the target three-dimensional model.
In one embodiment, after the calibrated two-dimensional slices are aligned in a growing manner, some minor errors may still remain. These errors are then rounded: each erroneous position or value is snapped to the nearest accurate value as far as possible.
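The growing alignment with error rounding can be sketched as walking the slices in numbered order and snapping away small offsets between consecutive slices; the centroid-offset criterion and tolerance below are my illustrative assumptions:

```python
import numpy as np

def align_slices(slices, tol=0.05):
    """Growing-alignment sketch: visit the numbered slices in order and
    snap each slice's small horizontal offset onto the slice below it.

    slices: list of (N, 2) polygon arrays, one per slice, bottom first.
    Offsets smaller than `tol` are treated as reconstruction error and
    removed; larger offsets are kept as real geometry (e.g. a setback).
    """
    aligned = [slices[0].astype(float)]
    for s in slices[1:]:
        s = s.astype(float)
        shift = aligned[-1].mean(axis=0) - s.mean(axis=0)
        if np.linalg.norm(shift) < tol:
            s = s + shift  # snap the slice back into alignment
        aligned.append(s)
    return aligned

base = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
jittered = base + 0.01  # small drift introduced by noise
out = align_slices([base, jittered])
```

Keeping offsets above the tolerance ensures genuine shape changes between floors are not flattened away, while noise-level drift is removed.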
Optionally, in an embodiment, after steps S10 to S40 are performed, the method further includes converting the target three-dimensional model into a mesh structure. A target three-dimensional model converted to a mesh structure occupies less computer memory and is more convenient to transmit and analyze. In summary, the method identifies the target building in multiple pictures shot from multiple angles, calculates its pose and point cloud in each picture, combines them according to a preset scheme into a preliminary three-dimensional model, segments that model into a preset number of two-dimensional slices, and aligns the slices in a preset manner to obtain the target three-dimensional model, finally reconstructing a more accurate model.
Referring to fig. 4, which is a schematic structural diagram of an embodiment of an apparatus 10 for reconstructing a model according to the present application, the apparatus 10 includes a processor 12 and a memory 14 that are electrically connected to each other, the processor 12 is coupled to the memory 14, and the processor 12 executes instructions to implement the building model building method as described above when operating, and stores processing results generated by the executed instructions in the memory 14.
In an embodiment, the device 10 for reconstructing a model may be, but is not limited to, a mobile phone, a notebook computer, a tablet computer with communication and networking functions, or another device with the model reconstruction function.
Referring to fig. 5, which is a schematic structural diagram of an embodiment of the apparatus 20 with a storage function according to the present application: the storage apparatus 20 stores program data which, when executed, implements the building model construction method described above. Specifically, the apparatus 20 with a storage function may be a memory of a terminal device, a personal computer, a server, a network device, or a USB disk. The above description is only of embodiments of the present application and is not intended to limit its scope; all equivalent structural or process modifications made based on the contents of this specification and drawings, or applied directly or indirectly in other related technical fields, likewise fall within the scope of the present application.

Claims (10)

1. A building model construction method is characterized by comprising the following steps:
identifying target buildings in a plurality of pictures shot at multiple angles;
calculating the pose and the point cloud of the target building in each picture;
performing combined processing according to the poses and point clouds of the target building corresponding to the multiple pictures to obtain a preliminary three-dimensional model;
performing two-dimensional segmentation processing on the preliminary three-dimensional model to obtain a preset number of two-dimensional slices; the two-dimensional segmentation processing of the preliminary three-dimensional model to obtain a preset number of two-dimensional slices comprises the following steps: detecting straight lines and planes of the preliminary three-dimensional model; fitting a plane in the first direction and/or a plane in the second direction according to the straight line and/or the plane; performing two-dimensional segmentation on the preliminary three-dimensional model according to the plane in the first direction and/or the plane in the second direction to obtain a preset number of two-dimensional slices;
aligning the two-dimensional slices according to a preset mode to obtain a target three-dimensional model,
and converting the target three-dimensional model into a mesh structure.
2. The building model construction method of claim 1, wherein the step of identifying the target building in the plurality of multi-angle pictures further comprises: and replacing part of pixels except the target building in each picture with pure white.
3. The building model construction method according to claim 1, wherein the step of calculating the pose and point cloud of the target building in each of the pictures specifically comprises:
and calculating the pose and the point cloud of the target building by utilizing a motion recovery structure algorithm.
4. The building model construction method according to claim 1, wherein the step of performing combined processing according to the poses of the target building and the point cloud corresponding to the plurality of pictures to obtain a preliminary three-dimensional model comprises:
and calculating by using a multi-view stereoscopic vision algorithm and performing combined processing to obtain a preliminary three-dimensional model according to the poses and the point clouds of the target building corresponding to the multiple pictures.
5. The method for constructing a building model according to claim 1, wherein the step of performing two-dimensional segmentation on the preliminary three-dimensional model to obtain a preset number of two-dimensional slices further comprises:
and sequentially carrying out two-dimensional segmentation processing on the preliminary three-dimensional model according to a preset direction to obtain a preset number of two-dimensional slices, and numbering the two-dimensional slices according to the segmentation sequence.
6. The building model construction method according to claim 5, further comprising: normalizing the preset number of two-dimensional slices to obtain calibrated two-dimensional slices.
7. The building model construction method according to claim 1, wherein the step of aligning the two-dimensional slices in a preset manner to obtain the target three-dimensional model specifically comprises:
and sequentially aligning the two-dimensional slices in a growing mode to obtain a target three-dimensional model.
8. The building model construction method according to claim 7, wherein the step of aligning the two-dimensional slices in a preset manner to obtain the target three-dimensional model specifically comprises:
and aligning the two-dimensional slices according to a preset mode, and taking and processing the two-dimensional slices in the aligning process to obtain a target three-dimensional model.
9. An apparatus for reconstructing a model, comprising a processor and a memory electrically connected to each other, the processor being coupled to the memory and being configured to execute instructions to implement the method according to any one of claims 1 to 8 when the processor is in operation and to store processing results generated by the execution of the instructions in the memory.
10. An apparatus having a storage function, characterized in that program data are stored which, when executed, implement the method according to any one of claims 1 to 8.
CN201711498685.7A 2017-12-29 2017-12-29 Building model construction method and device and storage device Active CN109992809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711498685.7A CN109992809B (en) 2017-12-29 2017-12-29 Building model construction method and device and storage device


Publications (2)

Publication Number Publication Date
CN109992809A CN109992809A (en) 2019-07-09
CN109992809B true CN109992809B (en) 2023-03-10

Family

ID=67110248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711498685.7A Active CN109992809B (en) 2017-12-29 2017-12-29 Building model construction method and device and storage device

Country Status (1)

Country Link
CN (1) CN109992809B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648396A (en) * 2019-09-17 2020-01-03 西安万像电子科技有限公司 Image processing method, device and system
CN111090905A (en) * 2019-12-23 2020-05-01 陕西理工大学 Mathematical modeling method using computer multidimensional space
CN111882977B (en) * 2020-05-06 2022-04-29 北京嘀嘀无限科技发展有限公司 High-precision map construction method and system
CN111640180B (en) * 2020-08-03 2020-11-24 深圳市优必选科技股份有限公司 Three-dimensional reconstruction method and device and terminal equipment
CN112530005B (en) * 2020-12-11 2022-10-18 埃洛克航空科技(北京)有限公司 Three-dimensional model linear structure recognition and automatic restoration method
CN112765709B (en) * 2021-01-15 2022-02-01 贝壳找房(北京)科技有限公司 House type graph reconstruction method and device based on point cloud data
CN113654527B (en) * 2021-09-02 2024-05-07 宁波九纵智能科技有限公司 Construction site panoramic management and control display method and system based on Beidou positioning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093498A (en) * 2013-01-25 2013-05-08 西南交通大学 Three-dimensional human face automatic standardization method
CN103744085A (en) * 2014-01-17 2014-04-23 哈尔滨工程大学 Underwater robot five component ranging sonar inclined shaft three dimensional imaging system and imaging method
CN106709947A (en) * 2016-12-20 2017-05-24 西安交通大学 RGBD camera-based three-dimensional human body rapid modeling system
CN107067429A (en) * 2017-03-17 2017-08-18 徐迪 Video editing system and method that face three-dimensional reconstruction and face based on deep learning are replaced
CN107146280A (en) * 2017-05-09 2017-09-08 西安理工大学 A kind of point cloud building method for reconstructing based on cutting


Also Published As

Publication number Publication date
CN109992809A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN109992809B (en) Building model construction method and device and storage device
CN107077744B (en) Method and system for three-dimensional model generation using edges
CN110310326B (en) Visual positioning data processing method and device, terminal and computer readable storage medium
US8447099B2 (en) Forming 3D models using two images
US8452081B2 (en) Forming 3D models using multiple images
KR101310589B1 (en) Techniques for rapid stereo reconstruction from images
JP2018536915A (en) Method and system for detecting and combining structural features in 3D reconstruction
US10380796B2 (en) Methods and systems for 3D contour recognition and 3D mesh generation
US11315313B2 (en) Methods, devices and computer program products for generating 3D models
CN115409933B (en) Multi-style texture mapping generation method and device
CN112270736B (en) Augmented reality processing method and device, storage medium and electronic equipment
CN113362314B (en) Medical image recognition method, recognition model training method and device
CN114067051A (en) Three-dimensional reconstruction processing method, device, electronic device and storage medium
CN110942102B (en) Probability relaxation epipolar matching method and system
CN117274605B (en) Method and device for extracting water area outline from photo shot by unmanned aerial vehicle
US20240087231A1 (en) Method, apparatus, computer device and storage medium for three-dimensional reconstruction of indoor structure
CN116051980B (en) Building identification method, system, electronic equipment and medium based on oblique photography
US20230260211A1 (en) Three-Dimensional Point Cloud Generation Method, Apparatus and Electronic Device
CN113822097A (en) Single-view human body posture recognition method and device, electronic equipment and storage medium
CN115375823B (en) Three-dimensional virtual clothing generation method, device, equipment and storage medium
CN115937002B (en) Method, apparatus, electronic device and storage medium for estimating video rotation
CN108151712B (en) Human body three-dimensional modeling and measuring method and system
CN110135340A (en) 3D hand gestures estimation method based on cloud
CN116246038B (en) Multi-view three-dimensional line segment reconstruction method, system, electronic equipment and medium
CN116188497B (en) Method, device, equipment and storage medium for optimizing generation of DSM (digital image model) of stereo remote sensing image pair

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 22nd floor, building C1, Nanshan wisdom Park, 1001 Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen UBTECH Technology Co.,Ltd.

Address before: 22nd floor, building C1, Nanshan wisdom Park, 1001 Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: Shenzhen UBTECH Technology Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20231205

Address after: Room 601, 6th Floor, Building 13, No. 3 Jinghai Fifth Road, Beijing Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 100176

Patentee after: Beijing Youbixuan Intelligent Robot Co.,Ltd.

Address before: 22nd floor, building C1, Nanshan wisdom Park, 1001 Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: Shenzhen UBTECH Technology Co.,Ltd.