CN111369678A - Three-dimensional scene reconstruction method and system


Info

Publication number: CN111369678A
Application number: CN201811587475.XA
Authority: CN (China)
Legal status: Pending
Prior art keywords: point cloud data; frame; depth point cloud; scene
Inventors: 洪悦 (Hong Yue), 朱兴霞 (Zhu Xingxia), 毛卫柱 (Mao Weizhu), 张严严 (Zhang Yanyan)
Assignee (original and current): Zhejiang Sunny Optical Intelligent Technology Co Ltd
Original language: Chinese (zh)
Application filed by Zhejiang Sunny Optical Intelligent Technology Co Ltd; priority to CN201811587475.XA.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform


Abstract

The three-dimensional scene reconstruction method comprises: acquiring first frame depth point cloud data of a scene to be reconstructed; constructing a depth pyramid corresponding to the first frame depth point cloud data; converting the first frame depth point cloud data into a volume representation; acquiring second frame depth point cloud data of the scene to be reconstructed; constructing a depth pyramid corresponding to the second frame depth point cloud data; stitching the first frame depth point cloud data and the second frame depth point cloud data to obtain the global pose of the TOF camera module when it acquires the second frame depth point cloud data; converting the second frame depth point cloud data into a volume representation; fusing the volume representations of the first frame depth point cloud data and the second frame depth point cloud data; and triangulating the volume-based fused representation of the scene to be reconstructed to obtain a three-dimensional model of the scene to be reconstructed under the viewing angle at which the TOF camera module acquired the second frame depth point cloud data.

Description

Three-dimensional scene reconstruction method and system
Technical Field
The invention relates to the field of computers, in particular to a three-dimensional scene reconstruction method and a three-dimensional scene reconstruction system.
Background
In recent years, with the rise of computer vision and robotics, real-time three-dimensional reconstruction technology has become a research hotspot and has been widely used in the fields of three-dimensional modeling, AR (Augmented Reality), VR (Virtual Reality), MR (Mixed Reality), and SLAM (Simultaneous Localization and Mapping).
The conventional real-time three-dimensional reconstruction technique usually adopts the classical SFM (Structure from Motion) algorithm as its basis. The SFM algorithm takes two-dimensional images as input, typically searches for feature points in the images for inter-frame matching, and solves for the change in camera pose by minimizing the reprojection error, so as to construct a three-dimensional model. Because it uses two-dimensional images as input, the SFM algorithm is affected by feature visibility and completeness, loses some three-dimensional information, and its reconstruction accuracy is therefore greatly limited. Moreover, the subsequent steps of screening matches, solving the pose, and handling accumulated error are slow to compute, making real-time three-dimensional reconstruction difficult.
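The reprojection error minimized by SFM can be made concrete with a small sketch (the function name and the pinhole intrinsics matrix K are illustrative assumptions, not disclosed by the patent):

```python
import numpy as np

def reprojection_error(K, R, t, points3d, points2d):
    """Mean pixel distance between observed 2D features and 3D points
    reprojected through a zero-skew pinhole camera with pose (R, t)."""
    cam = points3d @ R.T + t          # world -> camera coordinates
    proj = cam[:, :2] / cam[:, 2:3]   # perspective division
    pix = proj @ K[:2, :2].T + K[:2, 2]
    return np.linalg.norm(pix - points2d, axis=1).mean()

K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
pts3d = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
obs = np.array([[50.0, 50.0], [60.0, 50.0]])  # exact projections
err = reprojection_error(K, np.eye(3), np.zeros(3), pts3d, obs)
```

SFM searches over (R, t) and the 3D points to drive this quantity toward zero; here the observations match exactly, so the error is zero.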
To overcome the low reconstruction accuracy of the traditional real-time three-dimensional reconstruction technique, one solution is to use a high-precision three-dimensional laser scanner as the input device; however, such scanners are expensive, which hinders large-scale adoption.
With the continuous development of computer application technology, real-time three-dimensional reconstruction is bound to be applied ever more widely. How to overcome the low reconstruction accuracy and low reconstruction speed of the traditional technique, which prevent real-time reconstruction, has therefore become an urgent technical problem.
Disclosure of Invention
The invention aims to provide a three-dimensional scene reconstruction method and a system thereof, wherein the method reconstructs a three-dimensional scene based on depth images with high reconstruction accuracy.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the method performs three-dimensional reconstruction based on depth images with a fast scene reconstruction speed.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the method constructs a depth pyramid for the depth point cloud data, which speeds up subsequent processing of the depth point cloud data and thereby the three-dimensional scene reconstruction.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the method computes the camera pose from coarse to fine, accelerating the three-dimensional reconstruction while maintaining its accuracy.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the method uses all depth information for camera tracking and can complete high-quality geometric surface reconstruction under variable illumination.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the method uses a ray casting algorithm for real-time display of the three-dimensional model, can handle partially dynamic environments, and provides interactive display from any viewing angle.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the method uses a ray casting algorithm for real-time display of the three-dimensional model and can handle interaction with virtual objects, virtual light, foreground persons, and the like.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the method can perform real-time three-dimensional reconstruction of fast-moving objects.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the method stitches depth point cloud data using a TSDF (truncated signed distance function) representation, which offers low noise, high data-fusion quality, and fast point cloud stitching.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein a user can save the stitched scene as a three-dimensional file at any time.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein a user can interact with the reconstructed three-dimensional model in real time.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the method is simple, easy to implement, and low in cost.
Another object of the present invention is to provide a three-dimensional scene reconstruction method and system thereof, wherein the depth image-based scene reconstruction system is simple in structure, easy to implement, and low in cost.
Accordingly, to achieve at least one of the above objects, the present invention provides a three-dimensional scene reconstruction method, comprising:
acquiring first frame depth point cloud data of a scene to be reconstructed through a TOF camera module;
preprocessing the first frame depth point cloud data to construct a depth pyramid corresponding to the first frame depth point cloud data;
processing the preprocessed first frame depth point cloud data with a truncated signed distance function (TSDF) to convert the first frame depth point cloud data into a volume representation;
acquiring second frame depth point cloud data of the scene to be reconstructed through the TOF camera module;
preprocessing the second frame depth point cloud data to construct a depth pyramid corresponding to the second frame depth point cloud data;
stitching the first frame depth point cloud data and the second frame depth point cloud data based on their corresponding depth pyramids to obtain the global pose of the TOF camera module when it acquires the second frame depth point cloud data;
processing the preprocessed second frame depth point cloud data with the truncated signed distance function to convert the second frame depth point cloud data into a volume representation;
fusing the volume representation of the first frame depth point cloud data and the volume representation of the second frame depth point cloud data to obtain a volume-based fused representation of the scene to be reconstructed at the time the TOF camera module acquires the second frame depth point cloud data; and
in response to the TOF camera module stopping scanning the scene to be reconstructed, triangulating the volume-based fused representation of the scene to be reconstructed to obtain a three-dimensional model of the scene to be reconstructed under the viewing angle at which the TOF camera module acquired the second frame depth point cloud data.
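The per-frame loop claimed above can be sketched as follows. This is a minimal control-flow illustration only: the function names are hypothetical, and the heavy steps (pyramid, ICP alignment, TSDF conversion, fusion) are reduced to trivial stand-ins so the ordering of the steps is visible.

```python
import numpy as np

def preprocess(depth):
    """Stand-in for denoising + pyramid construction: a one-level 'pyramid'."""
    return [depth]

def icp_align(prev_pyr, curr_pyr, prev_pose):
    """Stand-in for coarse-to-fine ICP: assume the camera did not move."""
    return prev_pose

def to_tsdf(depth, pose):
    """Stand-in for the truncated-signed-distance volume conversion."""
    return depth.astype(np.float32)

def fuse(global_tsdf, frame_tsdf):
    """Stand-in fusion: simple average with the running volume."""
    return frame_tsdf if global_tsdf is None else 0.5 * (global_tsdf + frame_tsdf)

def reconstruct(frames):
    pose, global_tsdf, prev_pyr = np.eye(4), None, None
    for depth in frames:
        pyr = preprocess(depth)
        if prev_pyr is not None:               # frames after the first are aligned
            pose = icp_align(prev_pyr, pyr, pose)
        global_tsdf = fuse(global_tsdf, to_tsdf(depth, pose))
        prev_pyr = pyr
    return global_tsdf, pose                   # triangulate global_tsdf when scanning stops

frames = [np.full((4, 4), 1.0), np.full((4, 4), 2.0)]
tsdf, pose = reconstruct(frames)
```

In the actual method each stand-in is replaced by the corresponding claimed step; the loop structure (acquire, preprocess, align, convert, fuse) is what the claims describe.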
According to an embodiment of the present invention, between the fusing step and the triangulating step, the method further includes:
acquiring third frame depth point cloud data of the scene to be reconstructed through the TOF camera module;
preprocessing the third frame depth point cloud data to construct a depth pyramid corresponding to the third frame depth point cloud data;
stitching the second frame depth point cloud data and the third frame depth point cloud data based on their corresponding depth pyramids to obtain the global pose of the TOF camera module when it acquires the third frame depth point cloud data;
processing the preprocessed third frame depth point cloud data with the truncated signed distance function to convert the third frame depth point cloud data into a volume representation; and
fusing the volume representation of the third frame depth point cloud data with the volume-based fused representation generated from the first and second frame depth point cloud data, to obtain a volume-based fused representation of the scene to be reconstructed at the time the TOF camera module acquires the third frame depth point cloud data.
According to an embodiment of the present invention, the three-dimensional reconstruction method further includes: based on the current pose of the TOF camera module, performing ray casting on the volume-based fused representation of the scene to be reconstructed to generate real-time display data of the scene to be reconstructed.
According to an embodiment of the present invention, before constructing the depth pyramids corresponding to the first, second, and third frame depth point cloud data, the method further comprises: performing noise reduction on the first frame depth point cloud data, the second frame depth point cloud data, and the third frame depth point cloud data.
According to one embodiment of the invention, the operating wavelength of the TOF camera module is 850 nm.
According to one embodiment of the invention, the field of view of the TOF camera module is 60° (H) × 45° (V).
According to one embodiment of the invention, the maximum supported frame rate of the TOF camera module is 30 frames/second.
According to another aspect of the present invention, the present invention further provides a three-dimensional scene reconstruction system, comprising:
a TOF camera module capable of acquiring first frame depth point cloud data and second frame depth point cloud data of a scene to be reconstructed;
a depth pyramid construction unit, operably connected to the TOF camera module, capable of constructing depth pyramids corresponding to the first frame depth point cloud data and the second frame depth point cloud data;
a volume representation unit, operably connected to the depth pyramid construction unit, capable of processing the preprocessed first and second frame depth point cloud data with a truncated signed distance function to convert them into volume representations;
a point cloud stitching unit, operably connected to the depth pyramid construction unit, capable of stitching the first frame depth point cloud data and the second frame depth point cloud data based on their corresponding depth pyramids to obtain the current pose of the TOF camera module;
a volume representation fusion unit, operably connected to the volume representation unit, capable of fusing the volume representations of the first and second frame depth point cloud data to obtain a volume-based fused representation of the scene to be reconstructed under the current pose of the TOF camera module; and
a triangulation processing unit, operably connected to the volume representation fusion unit, capable of triangulating the volume-based fused representation of the scene to be reconstructed to obtain a three-dimensional model of the scene to be reconstructed under the current pose of the TOF camera module.
According to an embodiment of the invention, the TOF camera module is further capable of acquiring third frame depth point cloud data; the depth pyramid construction unit can construct a depth pyramid corresponding to the third frame depth point cloud data; and the point cloud stitching unit can stitch the second frame depth point cloud data and the third frame depth point cloud data to obtain the current pose of the TOF camera module. The volume representation unit can process the third frame depth point cloud data, for which the depth pyramid has been constructed, with the truncated signed distance function to convert it into a volume representation; the volume representation fusion unit can fuse the volume representation of the third frame depth point cloud data with the volume-based fused representation generated from the first and second frame depth point cloud data to obtain a volume-based fused representation of the scene to be reconstructed under the current pose of the TOF camera module; and the triangulation processing unit can triangulate the volume-based fused representation of the scene to be reconstructed to obtain a three-dimensional model of the scene to be reconstructed under the current pose of the TOF camera module.
According to an embodiment of the present invention, the three-dimensional scene reconstruction system further includes a ray casting unit, operably connected to the volume representation fusion unit, capable of performing ray casting on the volume-based fused representation of the scene to be reconstructed to generate real-time display data of the scene to be reconstructed.
According to an embodiment of the present invention, the three-dimensional scene reconstruction system further includes a noise reduction unit, operably connected to the TOF camera module and the depth pyramid construction unit respectively, capable of performing noise reduction on the first, second, and third frame depth point cloud data before the corresponding depth pyramids are constructed.
Drawings
Fig. 1 is a schematic structural block diagram of a three-dimensional scene reconstruction method according to a preferred embodiment of the present invention.
Fig. 2 is a schematic structural block diagram of a three-dimensional scene reconstruction method according to a preferred embodiment of the present invention.
Fig. 3 is a schematic block diagram of a depth image-based scene reconstruction system according to a preferred embodiment of the present invention.
Fig. 4 is a schematic flow chart diagram of a three-dimensional scene reconstruction method according to a preferred embodiment of the present invention.
Fig. 5 is a schematic diagram of an application of a depth image-based scene reconstruction system according to a preferred embodiment of the present invention.
Fig. 6 is a schematic diagram of an application of the three-dimensional scene reconstruction method according to a preferred embodiment of the present invention.
Fig. 7 is a schematic diagram of an application of the three-dimensional scene reconstruction method according to a preferred embodiment of the present invention.
Fig. 8 is a schematic diagram of an application of the three-dimensional scene reconstruction method according to a preferred embodiment of the present invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
It will be understood by those skilled in the art that in the present disclosure, the terms "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for ease of description and simplicity of description, and do not indicate or imply that the referenced devices or components must be in a particular orientation, constructed and operated in a particular orientation, and thus the above terms are not to be construed as limiting the present invention.
It should be understood that the term "a" or "an" as used herein means "at least one": the number of an element may be one in one embodiment and plural in another, and the term "a" is not to be construed as limiting the number.
With reference to fig. 1 and fig. 2, a depth image-based three-dimensional scene reconstruction method provided by the present invention is illustrated. The depth image-based three-dimensional reconstruction method comprises the following steps:
101: acquiring first frame depth point cloud data 11 of a scene 200 to be reconstructed through a TOF camera module 10;
102: preprocessing the first frame depth point cloud data 11 to construct a depth pyramid corresponding to the first frame depth point cloud data 11;
103: processing the preprocessed first frame depth point cloud data 11 with a truncated signed distance function to convert the first frame depth point cloud data 11 into a volume representation;
104: acquiring second frame depth point cloud data 12 of the scene 200 to be reconstructed through the TOF camera module 10;
105: preprocessing the second frame depth point cloud data 12 to construct a depth pyramid corresponding to the second frame depth point cloud data 12;
106: stitching the first frame depth point cloud data 11 and the second frame depth point cloud data 12 based on their corresponding depth pyramids to obtain the global pose of the TOF camera module 10 when it acquires the second frame depth point cloud data 12;
107: processing the preprocessed second frame depth point cloud data 12 with the truncated signed distance function to convert the second frame depth point cloud data 12 into a volume representation;
108: fusing the volume representation of the first frame depth point cloud data 11 and the volume representation of the second frame depth point cloud data 12 to obtain a volume-based fused representation of the scene 200 to be reconstructed at the time the TOF camera module 10 acquires the second frame depth point cloud data 12; and
109: in response to the TOF camera module 10 stopping scanning the scene 200 to be reconstructed, triangulating the volume-based fused representation of the scene 200 to be reconstructed to obtain a three-dimensional model of the scene 200 to be reconstructed under the viewing angle at which the TOF camera module 10 acquired the second frame depth point cloud data 12.
Referring to fig. 2, the three-dimensional scene reconstruction method further includes the following steps between step 108 and step 109:
201: acquiring third frame depth point cloud data 13 of the scene 200 to be reconstructed through the TOF camera module 10;
202: preprocessing the third frame depth point cloud data 13 to construct a depth pyramid corresponding to the third frame depth point cloud data 13;
203: stitching the second frame depth point cloud data 12 and the third frame depth point cloud data 13 based on their corresponding depth pyramids to obtain the global pose of the TOF camera module 10 when it acquires the third frame depth point cloud data 13;
204: processing the preprocessed third frame depth point cloud data 13 with the truncated signed distance function to convert the third frame depth point cloud data 13 into a volume representation; and
205: fusing the volume representation of the third frame depth point cloud data 13 with the fused representation generated from the volume representations of the first frame depth point cloud data 11 and the second frame depth point cloud data 12, to obtain a volume-based fused representation of the scene to be reconstructed at the time the TOF camera module 10 acquires the third frame depth point cloud data 13.
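The patent does not spell out the fusion rule in step 205. A common choice in TSDF pipelines of this kind (e.g. KinectFusion-style systems) is a per-voxel running weighted average, sketched here under that assumption; `fuse_tsdf` and its parameters are illustrative names:

```python
import numpy as np

def fuse_tsdf(tsdf, weight, frame_tsdf, frame_weight=1.0):
    """Running weighted average of truncated signed distances per voxel.

    tsdf, weight    : accumulated volume and its per-voxel weight
    frame_tsdf      : TSDF computed from the newly acquired frame
    frame_weight    : confidence of the new frame (often 1.0)
    """
    new_weight = weight + frame_weight
    fused = (tsdf * weight + frame_tsdf * frame_weight) / new_weight
    return fused, new_weight

# Accumulate two frames into an initially empty volume.
vol = np.zeros(3)
w = np.zeros(3)
vol, w = fuse_tsdf(vol, w, np.array([1.0, -1.0, 0.5]))
vol, w = fuse_tsdf(vol, w, np.array([0.0, 0.0, 0.5]))
```

Averaging the signed distances this way suppresses per-frame depth noise, which is consistent with the low-noise, high-quality fusion claimed for the TSDF representation.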
According to an embodiment of the present invention, the three-dimensional scene reconstruction method further comprises the following step:
301: performing ray casting on the volume-based fused representation of the scene 200 to be reconstructed to generate real-time display data of the scene 200 to be reconstructed.
According to an embodiment of the present invention, before constructing the depth pyramid corresponding to the first frame depth point cloud data 11, the second frame depth point cloud data 12, and the third frame depth point cloud data 13, the method for reconstructing a three-dimensional scene further includes: and performing noise reduction processing on the first frame depth point cloud data 11, the second frame depth point cloud data 12 and the third frame depth point cloud data 13.
According to an embodiment of the invention, the TOF camera module adopted in the three-dimensional scene reconstruction method is the Mars05 module. The Mars05 adopts a recent TOF scheme; its sensors are a 1/1.4-inch CMOS (depth) and a 1/6-inch CMOS (color), supporting 640 × 480 high-precision point cloud output (error within 1%) and 1920 × 1080 color output. The frame rate reaches 30 frames per second, and 3D dynamic capture data can be processed and output in real time. The TOF field of view is 60° (H) × 45° (V), the RGB field of view is 81.86° (H) × 52° (V), the working wavelength is 850 nm (VCSEL), and the detection range is 0.1 m to 6 m. A six-degree-of-freedom IMU sensor (ICM20690) is built in, the module resists non-glare highlight interference, and its usable environment is wide. The Mars05 module carries a Movidius platform with an MA2150 processing chip; power supply and data transmission are through a standard USB 3.0/2.0 interface; the supported operating systems are Linux Ubuntu 14.04, Windows 7/8/10 x86/x64, and Android 4.3 or higher; the device size is 22 mm (width) × 85 mm (length) × 17.4 mm (height). It has the advantages of small size, low power consumption, and convenience of use and popularization.
In the three-dimensional scene reconstruction method provided by the invention, before the depth pyramids corresponding to the first frame depth point cloud data 11, the second frame depth point cloud data 12, and the third frame depth point cloud data 13 are constructed, noise reduction is performed on these data, which facilitates their subsequent processing and improves the three-dimensional scene reconstruction speed.
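The patent does not name the noise-reduction filter. An edge-preserving bilateral filter on the depth map is a typical choice for this step in depth-based pipelines, and a minimal sketch under that assumption (function name and parameters are illustrative) looks like:

```python
import numpy as np

def bilateral_depth_filter(depth, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing of a depth map.

    Pixels with value 0 are treated as invalid and are neither smoothed
    nor allowed to contribute to their neighbours.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            if depth[y, x] == 0:
                continue                      # invalid pixel stays invalid
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and depth[yy, xx] > 0:
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        wr = np.exp(-((depth[yy, xx] - depth[y, x]) ** 2)
                                    / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[yy, xx]
                        norm += ws * wr
            out[y, x] = acc / norm            # centre pixel guarantees norm > 0
    return out

d = np.full((5, 5), 1.0)
d[0, 0] = 0.0                                 # one invalid measurement
smoothed = bilateral_depth_filter(d)
```

The range kernel `wr` keeps depth discontinuities (object edges) sharp while the spatial kernel `ws` averages out sensor noise on smooth surfaces.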
In the three-dimensional scene reconstruction method provided by the invention, a corresponding depth pyramid is constructed for each of the first frame depth point cloud data 11, the second frame depth point cloud data 12, and the third frame depth point cloud data 13, so that the pose of the TOF camera module 10 can be computed from coarse to fine, which accelerates the three-dimensional reconstruction while maintaining high reconstruction accuracy.
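A depth pyramid of the kind described above can be built by repeated downsampling; a minimal sketch (2x2 block averaging is one common choice, assumed here, not specified by the patent):

```python
import numpy as np

def build_depth_pyramid(depth, levels=3):
    """Build a coarse-to-fine pyramid: each level halves the resolution
    by 2x2 block averaging. Pose estimation then starts at the coarsest
    (smallest) level and refines the estimate at each finer level."""
    pyramid = [np.asarray(depth, dtype=np.float64)]
    for _ in range(levels - 1):
        d = pyramid[-1]
        h, w = (d.shape[0] // 2) * 2, (d.shape[1] // 2) * 2  # crop to even size
        pyramid.append(
            d[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        )
    return pyramid

levels = build_depth_pyramid(np.ones((8, 8)), levels=3)
```

Aligning at the coarse levels first makes each ICP iteration cheap and supplies a good initial guess for the finer levels, which is what yields the claimed speedup without losing accuracy.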
In the three-dimensional scene reconstruction method provided by the present invention, the preprocessed first frame depth point cloud data 11, second frame depth point cloud data 12, and third frame depth point cloud data 13 are processed with a Truncated Signed Distance Function (TSDF) to obtain the corresponding volume representations. Compared with conventional point cloud or mesh representations, representing the depth point cloud data volumetrically with a truncated signed distance function permits data fusion with lower noise and higher quality, and it is faster to compute than the untruncated SDF (Signed Distance Function) representation.
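The conversion of a depth frame into a TSDF volume can be sketched as follows. This is an illustrative projective TSDF (distance measured along the optical axis, a standard approximation); the function name, grid layout, and parameters are assumptions, not disclosed by the patent:

```python
import numpy as np

def depth_to_tsdf(depth, K, pose, vol_shape, voxel_size, origin, trunc):
    """Project each voxel centre into the depth map and store the truncated
    signed distance, scaled to [-1, 1]: positive in front of the surface,
    negative behind it, +1 where no surface is observed."""
    zi, yi, xi = np.indices(vol_shape)                       # grid indexed [z, y, x]
    pts = np.stack([xi, yi, zi], axis=-1).reshape(-1, 3) * voxel_size + origin
    inv = np.linalg.inv(pose)                                # world -> camera
    cam = (inv[:3, :3] @ pts.T + inv[:3, 3:]).T
    tsdf = np.ones(len(pts))                                 # default: empty space
    z = cam[:, 2]
    valid = z > 0
    zsafe = np.where(valid, z, 1.0)
    u = np.round(K[0, 0] * cam[:, 0] / zsafe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * cam[:, 1] / zsafe + K[1, 2]).astype(int)
    h, w = depth.shape
    inb = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros(len(pts))
    d[inb] = depth[v[inb], u[inb]]
    meas = inb & (d > 0)                                     # voxels with a measurement
    tsdf[meas] = np.clip((d[meas] - z[meas]) / trunc, -1.0, 1.0)
    return tsdf.reshape(vol_shape)

# Two voxels along the optical axis, surface at depth 1.0.
vol = depth_to_tsdf(np.array([[1.0]]), np.eye(3), np.eye(4),
                    (2, 1, 1), 0.5, np.array([0.0, 0.0, 0.25]), 0.5)
```

Truncating the distance to a narrow band around the surface is what makes the representation cheaper to store and fuse than a full SDF, as the passage above notes.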
In step 109 of the three-dimensional scene reconstruction method provided by the present invention, in response to the TOF camera module 10 stopping scanning the scene 200 to be reconstructed, the volume-based fused representation of the scene 200 is triangulated to obtain a three-dimensional model of the scene 200 under the current pose of the TOF camera module 10. Tracking of the TOF camera module uses all of the depth information, and the model is triangulated with a wavefront method, so that high-quality geometric surface reconstruction can be completed under changeable lighting, in particular in indoor environments with variable illumination. It should be noted that the volume-based fused representation of the scene 200 is triangulated only after the TOF camera module 10 has stopped scanning the scene 200, thereby generating the three-dimensional model. Stopping the TOF camera module 10 from scanning the scene 200 includes, but is not limited to, a user's save-model operation.
In step 301 of the three-dimensional scene reconstruction method provided by the present invention, the volume-based fused representation of the scene 200 to be reconstructed is projected to obtain real-time display data of the scene 200 to be reconstructed. A ray casting algorithm renders the three-dimensional model in real time and can handle a partially dynamic environment, for example a user moving in front of the scene 200 to be reconstructed; it provides interactive display from any viewing angle and can also handle interaction with virtual objects, virtual light, foreground persons, and the like.
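Ray casting over a TSDF volume amounts to marching each view ray through the voxel grid and reporting the first positive-to-negative zero crossing as the rendered surface depth. The patent does not give its marching scheme; the one-ray sketch below, with an assumed fixed step size, shows the core idea.

```python
import numpy as np

def raycast_zero_crossing(tsdf_along_ray, step):
    """Walk a ray through TSDF samples and linearly interpolate the
    first positive-to-negative zero crossing -- the surface depth
    rendered for this ray."""
    for i in range(len(tsdf_along_ray) - 1):
        a, b = tsdf_along_ray[i], tsdf_along_ray[i + 1]
        if a > 0 >= b:                  # crossed from free space into the surface
            frac = a / (a - b)          # sub-step position of the crossing
            return (i + frac) * step
    return None                         # ray never hit the surface

samples = np.array([1.0, 1.0, 0.5, -0.5, -1.0])
depth = raycast_zero_crossing(samples, step=0.02)
```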
According to an embodiment of the present invention, in the step 106, based on the depth pyramid corresponding to the first frame depth point cloud data 11 and the depth pyramid corresponding to the second frame depth point cloud data 12, point cloud splicing is performed on the first frame depth point cloud data 11 and the second frame depth point cloud data 12 to obtain the global pose at which the TOF camera module 10 acquires the second frame depth point cloud data 12. An Iterative Closest Point (ICP) algorithm computes this global pose by combining the global pose of the TOF camera module 10 when it captured the first frame depth point cloud data 11, the volume representation of the first frame depth point cloud data 11, and the second frame depth point cloud data 12.
Similarly, in the step 203, based on the depth pyramid corresponding to the second frame depth point cloud data 12 and the depth pyramid corresponding to the third frame depth point cloud data 13, point cloud splicing is performed on the second frame depth point cloud data 12 and the third frame depth point cloud data 13 to obtain the global pose at which the TOF camera module 10 acquires the third frame depth point cloud data 13. The ICP algorithm computes this global pose by combining the global pose of the TOF camera module 10 when it captured the second frame depth point cloud data 12, the second frame depth point cloud data 12, and the third frame depth point cloud data 13.
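The ICP splicing used in steps 106 and 203 can be sketched as a single point-to-point iteration: match each source point to its nearest destination point, then solve the closed-form rigid transform (Kabsch/SVD). This is a simplification under stated assumptions; the patent's variant iterates this per pyramid level, coarse to fine, and the names below are illustrative.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration on two (N, 3) point sets."""
    # nearest-neighbour correspondences (brute force for the sketch)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # closed-form rigid alignment of src onto its matches
    sc, mc = src.mean(0), matched.mean(0)
    H = (src - sc).T @ (matched - mc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mc - R @ sc
    return R, t

# toy example: dst is src translated by (0.1, 0, 0)
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
dst = src + np.array([0.1, 0.0, 0.0])
R, t = icp_step(src, dst)
```

Chaining the recovered frame-to-frame transforms against the first frame's origin pose yields the global pose described in the text.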
It should be noted that, in the preferred embodiment, the global pose of the TOF camera module 10 when acquiring the first frame of depth point cloud data 11 is taken as the origin of a global coordinate system, so that the global pose of the TOF camera module 10 when acquiring each subsequent frame of depth point cloud data is obtained relative to it.
According to an embodiment of the present invention, in the step 108, the volume representation of the first frame depth point cloud data 11 and the volume representation of the second frame depth point cloud data 12 are fused to obtain the volume-based fused representation of the scene 200 to be reconstructed when the TOF camera module 10 acquires the second frame depth point cloud data 12.
According to an embodiment of the present invention, in the step 205, the volume representation of the third frame depth point cloud data 13 is fused with the volume-based fused representation previously generated from the volume representations of the first frame depth point cloud data 11 and the second frame depth point cloud data 12, so as to obtain the volume-based fused representation of the scene 200 to be reconstructed when the TOF camera module 10 acquires the third frame depth point cloud data 13.
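The frame-by-frame fusion of steps 108 and 205 is commonly implemented as a per-voxel weighted running average of TSDF values. The patent does not give its exact update rule, so the sketch below is a standard assumption, with an illustrative weight cap.

```python
import numpy as np

def fuse_tsdf(tsdf_acc, w_acc, tsdf_new, w_new=1.0, w_max=64.0):
    """Per-voxel running weighted average used in volumetric fusion:
    each voxel's stored TSDF is blended with the new frame's TSDF and
    its weight is accumulated (and capped), so later frames refine the
    surface rather than overwrite it."""
    w = w_acc + w_new
    tsdf = (tsdf_acc * w_acc + tsdf_new * w_new) / w
    return tsdf, np.minimum(w, w_max)

# two voxels, each already observed twice; a third observation arrives
v = np.array([0.2, -0.4])
w = np.array([2.0, 2.0])
v2, w2 = fuse_tsdf(v, w, np.array([0.8, -0.4]))   # -> [0.4, -0.4], weights [3, 3]
```

Because the update is a running average, noise from any single depth frame is averaged down, which matches the lower-noise fusion claimed for the volume representation.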
Referring to fig. 4, which shows a schematic flow diagram of the three-dimensional scene reconstruction method provided by the present invention: when a user needs to reconstruct a three-dimensional scene, the user first connects the TOF camera module 10 and acquires depth point cloud data through it. The acquired depth point cloud data is denoised, a corresponding depth pyramid is constructed, and the data is converted into a volume representation. The method then determines whether the acquired data is the first frame of depth point cloud data. If it is, a global coordinate system is constructed with the global pose at which the TOF camera module 10 acquired the first frame depth point cloud data 11 as the coordinate origin. If it is not, point cloud splicing is performed between the current frame and the previous frame of depth point cloud data to obtain the global pose of the current frame; the volume representation of the current frame is then fused with the volume representation, or earlier fused volume representation, of the preceding data to obtain the fused volume representation at the moment the TOF camera module 10 acquires the current frame; finally, a ray casting operation produces a real-time display of the three-dimensional reconstruction of the scene 200 to be reconstructed.
It should be noted that, in the three-dimensional scene reconstruction method provided by the present invention, the user can press a key at any time to save the reconstructed three-dimensional model of the scene 200 to be reconstructed as a three-dimensional file. When the user performs the save operation, the fused volume representation is triangulated to obtain the three-dimensional reconstruction model of the scene 200 to be reconstructed, which can be saved in a three-dimensional file format; when the user does not perform the save operation, the TOF camera module 10 continues to acquire depth point clouds of the scene 200 to be reconstructed.
Referring to fig. 3, according to another aspect of the present invention, in order to implement the three-dimensional scene reconstruction method provided by the present invention, the present invention further provides a three-dimensional scene reconstruction system. The three-dimensional scene reconstruction system comprises a TOF camera module 10, a depth pyramid construction unit 20, a volume representation unit 30, a point cloud splicing unit 40, a volume representation fusion unit 50 and a triangulation processing unit 60, wherein the TOF camera module 10 can acquire first frame depth point cloud data 11 and second frame depth point cloud data 12 of a scene 200 to be reconstructed; the depth pyramid constructing unit 20 is operably connected to the TOF camera module 10, and the depth pyramid constructing unit 20 can construct a depth pyramid corresponding to the first frame of depth point cloud data 11 and the second frame of depth point cloud data 12; the volume representation unit 30 is operably connected to the depth pyramid construction unit 20, and the volume representation unit 30 can process the preprocessed first frame depth point cloud data 11 and the preprocessed second frame depth point cloud data 12 by using a truncated symbolic distance function to convert the first frame depth point cloud data 11 and the second frame depth point cloud data 12 into a volume representation; the point cloud splicing unit 40 is operably connected to the depth pyramid constructing unit 20, and the point cloud splicing unit 40 can perform point cloud splicing on the first frame depth point cloud data 11 and the second frame depth point cloud data 12 based on a depth pyramid corresponding to the first frame depth point cloud data 11 and the second frame depth point cloud data 12, so as to obtain a global pose when the TOF camera module 10 acquires the second frame depth point cloud data 12; the volume representation fusion unit 50 is operably connected to the volume 
representation unit 30, and the volume representation fusion unit 50 can fuse the volume representations of the first frame depth point cloud data 11 and the second frame depth point cloud data 12 to obtain a volume-based fused representation of the scene 200 to be reconstructed when the TOF camera module 10 acquires the second frame depth point cloud data 12; the triangulation processing unit 60 is operably connected to the volume representation fusion unit 50, and the triangulation processing unit 60 can triangulate the volume-based fused representation of the scene 200 to be reconstructed, so as to obtain a three-dimensional model of the scene 200 to be reconstructed in the global pose at which the TOF camera module 10 acquires the second frame depth point cloud data 12.
According to an embodiment of the present invention, the TOF camera module 10 can further obtain a third frame of depth point cloud data 13, the depth pyramid constructing unit 20 can construct a depth pyramid corresponding to the third frame of depth point cloud data 13, and the point cloud stitching unit 40 can perform point cloud stitching on the second frame of depth point cloud data 12 and the third frame of depth point cloud data 13 to obtain a global pose when the TOF camera module 10 obtains the third frame of depth point cloud data 13; the volume representation unit 30 can process the third frame of depth point cloud data 13 with the truncated symbolic distance function to convert the third frame of depth point cloud data 13 into a volume representation; the volume representation fusion unit 50 can fuse the third frame depth point cloud data 13 based on volume representation and a volume-based fusion representation generated by fusing the first frame depth point cloud data 11 and the second frame depth point cloud data 12 based on volume representation to obtain a volume-based fusion representation of the scene 200 to be reconstructed under a global pose of the TOF camera module 10 when the third frame depth point cloud data 13 is acquired, and the triangulation processing unit 60 can triangulate the volume-based fusion representation of the scene 200 to be reconstructed to obtain a three-dimensional model of the scene 200 to be reconstructed under the global pose of the TOF camera module 10 when the third frame depth point cloud data 13 is acquired.
The three-dimensional scene reconstruction system further comprises a ray projection unit 70, wherein the ray projection unit 70 is operably connected to the volume representation fusion unit 50 and is capable of ray casting the volume-based fused representation of the scene 200 to be reconstructed to generate real-time display data of the scene 200 to be reconstructed.
The three-dimensional scene reconstruction system further comprises a noise reduction unit 80, wherein the noise reduction unit 80 is respectively and operatively connected to the TOF camera module 10 and the depth pyramid construction unit 20, and the noise reduction unit 80 can perform noise reduction on the first frame depth point cloud 11, the second frame depth point cloud 12 and the third frame depth point cloud 13 before constructing a depth pyramid corresponding to the first frame depth point cloud 11, the second frame depth point cloud 12 and the third frame depth point cloud 13, so as to improve subsequent data processing efficiency of the first frame depth point cloud 11, the second frame depth point cloud 12 and the third frame depth point cloud 13 and improve three-dimensional scene reconstruction efficiency.
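The patent does not specify how the noise reduction unit 80 filters the depth data. As a minimal hedged illustration of one common first step, invalid readings outside the sensor's reliable range can be masked out before any further processing; the range limits and function name below are assumptions, and real pipelines typically add a smoothing filter (e.g. bilateral) on top.

```python
import numpy as np

def mask_invalid_depth(depth, d_min=0.2, d_max=5.0):
    """Zero out depth readings outside an assumed reliable range,
    so downstream pyramid construction and TSDF integration can skip
    them. Only the validity-mask step is shown here."""
    out = depth.copy()
    out[(depth < d_min) | (depth > d_max)] = 0.0
    return out

d = np.array([0.0, 0.5, 7.0, 1.2])   # metres; 0.0 = no return, 7.0 = out of range
clean = mask_invalid_depth(d)
```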
The depth pyramid construction unit 20 can construct a depth pyramid from the depth point cloud data acquired by the TOF camera module 10, so that the global pose of the TOF camera module 10 can be calculated from coarse to fine, which guarantees higher precision of the three-dimensional scene reconstruction while increasing its speed.
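The coarse-to-fine pyramid can be sketched as repeated 2x2 averaging of the depth map: registration starts at the coarsest level, and the pose estimate is refined at each finer level. The level count below is illustrative.

```python
import numpy as np

def build_depth_pyramid(depth, levels=3):
    """Build a depth pyramid: each level halves the resolution by
    averaging 2x2 blocks of the previous level."""
    pyramid = [depth]
    for _ in range(levels - 1):
        d = pyramid[-1]
        h, w = d.shape[0] // 2 * 2, d.shape[1] // 2 * 2   # drop odd edge rows/cols
        d = d[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(d)
    return pyramid

depth = np.arange(16.0).reshape(4, 4)
pyr = build_depth_pyramid(depth)   # shapes (4, 4), (2, 2), (1, 1)
```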
The volume representation unit 30 can process the preprocessed first frame depth point cloud data 11, second frame depth point cloud data 12, and third frame depth point cloud data 13 with a Truncated Signed Distance Function (TSDF) to obtain the volume representations corresponding to the first frame depth point cloud data 11, the second frame depth point cloud data 12, and the third frame depth point cloud data 13. Compared with the traditional point cloud or mesh representation, the TSDF volume representation of the depth point cloud data allows data fusion with less noise and higher quality, and is faster to compute than an untruncated Signed Distance Function (SDF) representation.
The point cloud splicing unit 40 can use an Iterative Closest Point (ICP) algorithm to calculate the global pose at which the TOF camera module 10 acquires the second frame depth point cloud data 12, by combining the global pose of the TOF camera module 10 when it acquired the first frame depth point cloud data 11, the volume representation of the first frame depth point cloud data 11, and the second frame depth point cloud data 12; likewise, it can calculate the global pose at which the TOF camera module 10 acquires the third frame depth point cloud data 13, by combining the global pose of the TOF camera module 10 when it acquired the second frame depth point cloud data 12, the volume representation of the second frame depth point cloud data 12, and the third frame depth point cloud data 13.
The triangulation processing unit 60 can track the TOF camera module using all available depth information and triangulate the model with a wavefront (advancing-front) method, so that high-quality geometric surface reconstruction can be completed under variable lighting, and in particular under variable indoor illumination.
The ray projection unit 70 can apply a ray casting algorithm to the fused volume representation of the point cloud data to obtain the three-dimensionally reconstructed scene from the viewpoint of the current pose and render the three-dimensional model in real time. It can handle a partially dynamic environment, for example a user moving in front of the scene 200 to be reconstructed, provide interactive display from any viewing angle, and also handle interaction with virtual objects, virtual light, foreground persons, and the like.
The three-dimensional scene reconstruction system further comprises a display device 90, wherein the display device 90 is operatively connected to the triangulation processing unit 60, and the display device 90 is configured to display the three-dimensional reconstructed model of the scene 200 to be reconstructed.
Referring to fig. 6 to 8, schematic diagrams of applications of the three-dimensional scene reconstruction system provided by the present invention are shown. Referring to fig. 6, a volumetric representation of a frame of depth point cloud data of the scene 200 to be reconstructed acquired by the TOF camera module 10 is shown. Referring to fig. 7, a volumetric representation of another frame of depth point cloud data of the scene 200 to be reconstructed acquired by the TOF camera module 10 is shown. Referring to fig. 8, a three-dimensional model of the scene 200 to be reconstructed, which is established by the three-dimensional scene reconstruction system according to the present invention based on two frames of the depth point cloud data acquired by the TOF camera module, is shown.
It will be understood by those skilled in the art that the scene 200 to be reconstructed may be any three-dimensional object or combination of three-dimensional objects in real life, and the three-dimensional scene reconstruction system is capable of constructing a three-dimensional model of the scene 200 to be reconstructed.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the examples, and any variations or modifications of the embodiments of the present invention may be made without departing from the principles.

Claims (11)

1. A method for reconstructing a three-dimensional scene, comprising:
acquiring first frame depth point cloud data of a scene to be reconstructed through a TOF camera module;
preprocessing the first frame of depth point cloud data to construct a depth pyramid corresponding to the first frame of depth point cloud data;
processing the preprocessed first frame depth point cloud data with a truncated symbolic distance function to convert the first frame depth point cloud data into a volumetric representation;
acquiring second frame depth point cloud data of the scene to be reconstructed through the TOF camera module;
preprocessing the second frame depth point cloud data to construct a depth pyramid corresponding to the second frame depth point cloud data;
performing point cloud splicing on the first frame of depth point cloud data and the second frame of depth point cloud data based on a depth pyramid corresponding to the first frame of depth point cloud data and a depth pyramid corresponding to the second frame of depth point cloud data to obtain a global pose when the TOF camera module acquires the second frame of depth point cloud data;
processing the preprocessed second frame depth point cloud data by using a truncated symbol distance function so as to convert the second frame depth point cloud data into volume representation;
fusing the first frame depth point cloud data represented based on the volume and the second frame depth point cloud data represented based on the volume to obtain fused representation of the scene to be reconstructed based on the volume when the TOF camera module acquires the second frame depth point cloud data; and
and, in response to stopping the TOF camera module from scanning the scene to be reconstructed, performing triangulation processing on the volume-based fused representation of the scene to be reconstructed so as to obtain a three-dimensional model of the scene to be reconstructed at the viewing angle at which the TOF camera module acquires the second frame depth point cloud data.
2. The method of claim 1, wherein the merging the first frame of depth point cloud data based on volume representation with the second frame of depth point cloud data based on volume representation to obtain the volume-based merged representation of the scene to be reconstructed when the TOF camera module acquires the second frame of depth point cloud data, and the triangulating the volume-based merged representation of the scene to be reconstructed to obtain the three-dimensional model of the scene to be reconstructed at the viewing angle at which the TOF camera module acquires the second frame of depth point cloud data, in response to stopping the TOF camera module from scanning the scene to be reconstructed, further comprises:
acquiring third frame depth point cloud data of the scene to be reconstructed through the TOF camera module;
preprocessing the third frame depth point cloud data to construct a depth pyramid corresponding to the third frame depth point cloud data;
performing point cloud splicing on the second frame depth point cloud data and the third frame depth point cloud data based on a depth pyramid corresponding to the second frame depth point cloud data and a depth pyramid corresponding to the third frame depth point cloud data to obtain a global pose when the TOF camera module acquires the third frame depth point cloud data;
processing the preprocessed third frame depth point cloud data with a truncated signed distance function so as to convert the third frame depth point cloud data into a volume representation; and
and fusing the third frame depth point cloud data based on volume representation and the volume-based fused representation generated by fusing the first frame depth point cloud data and the second frame depth point cloud data based on volume representation to obtain the volume-based fused representation of the scene to be reconstructed when the TOF camera module acquires the third frame depth point cloud.
3. The three-dimensional scene reconstruction method according to claim 1 or 2, wherein the three-dimensional reconstruction method further comprises: and based on the current pose of the TOF camera module, carrying out ray projection processing on the fusion representation of the scene to be reconstructed based on the volume so as to generate real-time display data of the scene to be reconstructed.
4. The three-dimensional scene reconstruction method of claim 1 or 2, wherein before constructing the depth pyramid corresponding to the first frame of depth point cloud data, the second frame of depth point cloud data, and the third frame of depth point cloud data, further comprising: and carrying out noise reduction processing on the first frame depth point cloud data, the second frame depth point cloud data and the third frame depth point cloud data.
5. The three-dimensional scene reconstruction method according to claim 1 or 2, wherein the operating wavelength of the TOF camera module is 850 nm.
6. The three-dimensional scene reconstruction method according to claim 1 or 2, wherein the field angle of the TOF camera module is 60 ° (H) x 45 ° (V).
7. The three-dimensional scene reconstruction method according to claim 1 or 2, wherein the maximum supported frame rate of the TOF camera module is 30 frames/second.
8. A three-dimensional scene reconstruction system, comprising:
the TOF camera module can acquire a first frame depth point cloud data and a second frame depth point cloud data of a scene to be reconstructed;
the depth pyramid construction unit is operatively connected to the TOF camera module and can construct a depth pyramid corresponding to the first frame of depth point cloud data and the second frame of depth point cloud data;
a volume representation unit operably connected to the depth pyramid construction unit, the volume representation unit capable of processing the preprocessed first and second frames of depth point cloud data with a truncated symbolic distance function to convert the first and second frames of depth point cloud data into a volume representation;
the point cloud splicing unit is operably connected to the depth pyramid construction unit and can perform point cloud splicing on the first frame of depth point cloud data and the second frame of depth point cloud data based on a depth pyramid corresponding to the first frame of depth point cloud data and the second frame of depth point cloud data so as to obtain the current pose of the TOF camera module;
the volume representation fusion unit is operably connected to the volume representation unit and can fuse the first frame depth point cloud data and the second frame depth point cloud data based on volume representation so as to obtain a volume-based fusion representation of the scene to be reconstructed under the current pose of the TOF camera module; and
and the triangulation processing unit is operably connected to the volume representation fusion unit and can triangulate the volume-based fusion representation of the scene to be reconstructed so as to obtain a three-dimensional model of the scene to be reconstructed in the current pose of the TOF camera module.
9. The three-dimensional scene reconstruction system of claim 8, wherein the TOF camera module is further capable of acquiring a third frame of depth point cloud data; the depth pyramid construction unit can construct a depth pyramid corresponding to the third frame of depth point cloud data, and the point cloud splicing unit can perform point cloud splicing on the second frame of depth point cloud data and the third frame of depth point cloud data to obtain the current pose of the TOF camera module; the volume representation unit can process the third frame of depth point cloud data of the constructed depth pyramid by a truncated symbolic distance function so as to convert the third frame of depth point cloud data into volume representation; the volume representation fusion unit can fuse the third frame depth point cloud data based on volume representation and a volume-based fusion representation generated by fusing the first frame depth point cloud data and the second frame depth point cloud data based on volume representation to obtain a volume-based fusion representation of the scene to be reconstructed by the TOF camera module under the current pose, and the triangularization processing unit can triangulate the volume-based fusion representation of the scene to be reconstructed to obtain a three-dimensional model of the scene to be reconstructed by the TOF camera module under the current pose.
10. The three-dimensional scene reconstruction system of claim 8 or 9, wherein the three-dimensional scene reconstruction system further comprises a ray projection unit operatively connected to the volume representation fusion unit, the ray projection unit being capable of ray projecting the volume-based fused representation of the scene to be reconstructed to generate real-time display data of the scene to be reconstructed.
11. The three-dimensional scene reconstruction system of claim 8 or 9, wherein the three-dimensional scene reconstruction system further comprises a noise reduction unit operatively connected to the TOF camera module and the depth pyramid construction unit, respectively, and capable of performing noise reduction on the first frame depth point cloud data, the second frame depth point cloud data, and the third frame depth point cloud data before constructing a depth pyramid corresponding to the first frame depth point cloud data, the second frame depth point cloud data, and the third frame depth point cloud data.
CN201811587475.XA 2018-12-25 2018-12-25 Three-dimensional scene reconstruction method and system Pending CN111369678A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811587475.XA CN111369678A (en) 2018-12-25 2018-12-25 Three-dimensional scene reconstruction method and system


Publications (1)

Publication Number Publication Date
CN111369678A true CN111369678A (en) 2020-07-03

Family

ID=71211374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811587475.XA Pending CN111369678A (en) 2018-12-25 2018-12-25 Three-dimensional scene reconstruction method and system

Country Status (1)

Country Link
CN (1) CN111369678A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808253A (en) * 2021-08-31 2021-12-17 武汉理工大学 Dynamic object processing method, system, device and medium for scene three-dimensional reconstruction
CN114079781A (en) * 2020-08-18 2022-02-22 腾讯科技(深圳)有限公司 Data processing method, device and equipment for point cloud media and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803267A (en) * 2017-01-10 2017-06-06 西安电子科技大学 Indoor scene three-dimensional rebuilding method based on Kinect
CN107809601A (en) * 2017-11-24 2018-03-16 深圳先牛信息技术有限公司 Imaging sensor
CN207587003U (en) * 2017-07-06 2018-07-06 幻视互动(北京)科技有限公司 A kind of three-dimensional reconstruction apparatus based on depth camera module




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination