CN108765548A - Real-time three-dimensional scene reconstruction method based on a depth camera - Google Patents
Real-time three-dimensional scene reconstruction method based on a depth camera
- Publication number
- CN108765548A (application CN201810380432.8A)
- Authority
- CN
- China
- Prior art keywords
- depth
- depth camera
- camera
- point cloud
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/00—Image analysis; G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/50—Depth or shape recovery
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Abstract
The invention discloses a real-time three-dimensional scene reconstruction method based on a depth camera, comprising the following steps: acquisition of raw depth and color data; depth-data denoising; depth-image coordinate conversion; camera pose tracking; point-cloud fusion with recording of color information; implicit surface rendering of the point-cloud model; and reconstruction of the scene model. The invention scans a scene with a single depth camera, obtains the scene's depth data stream, processes the data in real time on a computer, and generates the corresponding three-dimensional model. The method provides a low-cost, easy-to-operate, high-quality, real-time three-dimensional reconstruction system that produces high-quality, highly visualizable three-dimensional models.
Description
Technical field
The present invention relates to the field of three-dimensional reconstruction, and more particularly to a real-time three-dimensional scene reconstruction method based on a depth camera.
Background technology
Three-dimensional reconstruction refers to capturing the shape and appearance of real objects, building a corresponding mathematical model, and processing, manipulating, and analyzing it on a computer. In computer graphics and computer vision, three-dimensional reconstruction is a common means of solving three-dimensional modeling problems and is of great practical significance for realistic 3D modeling and for modeling large-scale complex scenes. With the continuous advance of computer hardware and depth-data acquisition devices, three-dimensional reconstruction has found broad and deep application in fields such as film, games, medical assistance, and urban planning.
According to how the data are acquired, three-dimensional modeling techniques fall into two broad classes: contact and non-contact.
Contact: contact methods measure the three-dimensional data of the target object directly with professional equipment. Although the resulting data are relatively accurate, the measured object must be touched during measurement and may be damaged, so the range of application is limited.
Non-contact: non-contact methods obtain the target object's three-dimensional data indirectly through physical media such as light, sound, or magnetic fields, without touching the measured object. Depending on whether energy is actively emitted for sensing, non-contact scanning divides into passive and active methods. Passive methods emit no energy during sensing: a camera captures images of the target object, and its three-dimensional spatial information is then computed from the reflection of natural light. Passive modeling is cheap and needs no special hardware support during scanning, so it can be used for the reconstruction of fairly large scenes; its obvious drawback is a strict requirement on the illumination and texture conditions of the object surface. Active methods project a light or energy source onto the target surface and compute three-dimensional information from the transformation of the actively projected energy over time and space. Common active energy sources include laser, sound waves, and electromagnetic waves. Active modeling equipment is expensive and complicated to operate.
Microsoft's release of the Kinect depth camera attracted wide attention from researchers. Compared with traditional three-dimensional scanning equipment, the Kinect is compact, easy to operate, and cheap, and can synchronously acquire the depth and color information of an object surface in real time. Because the Kinect actively emits near-infrared light, its data acquisition is not easily affected by illumination or texture variations on the measured surface. The appearance of the Kinect triggered technical change in computer graphics, computer vision, and related fields, provided a completely new means for visual research, and has found broad and deep application in human-computer interaction, virtual reality, and other fields.
The KinectFusion algorithm estimates vertex normals from the cross product of neighboring points, which is inaccurate. Exploiting the structure of the point-cloud distribution, the present invention estimates a vertex's normal by analyzing the covariance of the point set formed by the sample vertex and its four adjacent samples, improving the accuracy of normal estimation. In point-cloud registration, different corresponding point pairs contribute differently to solving the transformation parameters, yet KinectFusion treats these differently contributing points identically, which may yield suboptimal results; the invention assigns qualified matching point pairs different certainty factors to improve the stability of camera pose tracking. In KinectFusion, the weight coefficients stored in a voxel at the current and previous moments are identical, so that the current moment's data are integrated into the previous model as quickly as possible and the model responds rapidly and accurately to dynamic changes in the scene; but when the camera lingers at the same position for a long time, this TSDF update scheme becomes unstable. By assigning weight 1 to the current TSDF value, the invention ensures that the value stored in a voxel is updated only when significant depth data are received, so it is not affected by noise, and the data stored in the cubic voxels after point-cloud fusion are smoother and more accurate. These shortcomings of KinectFusion are the problems to be solved by the present invention.
Summary of the invention
The technical problem to be solved by the invention is to provide a real-time three-dimensional scene reconstruction method based on a depth camera that can generate high-quality, highly visualizable three-dimensional models.
In order to solve the above technical problem, one aspect of the present invention provides a real-time three-dimensional scene reconstruction method based on a depth camera, comprising the following steps:
S1. According to experimental requirements, compile function libraries compatible with the depth camera and set the experimental system's environment variables;
S2. Move the depth camera to measure the same scene from different viewpoints, obtain multiple frames of continuous depth data, and denoise the depth data;
S3. Transform the denoised depth images from their different coordinate systems into a unified world coordinate system to obtain a three-dimensional point-cloud model;
S4. Compute a six-degree-of-freedom rigid-body transformation matrix to align the current point-cloud data with the existing reference point-cloud model, output the relative transformation parameters between two consecutive frames, and use them to initialize the camera pose for the next frame's alignment;
S5. After registration, fuse the repeated point-cloud data in overlapping regions, simplify the point cloud, and fuse color information;
S6. Render the scene in real time to guide the user in planning the depth camera's trajectory;
S7. Triangulate the point-cloud model with the MC (Marching Cubes) algorithm to generate a visualizable three-dimensional model.
In a preferred embodiment of the present invention, step S1 comprises:
S11. First install the depth-camera device driver, and on this basis build a data acquisition module based on the depth camera and OpenNI;
S12. Compile the VTK and OpenCV open-source code with CMake to obtain independent third-party function libraries, and configure the corresponding system environment variables;
S13. Install the Eigen and CUDA libraries and configure the corresponding system environment variables.
In a preferred embodiment of the present invention, step S2 comprises:
S21. Connect the depth camera to the computer through a USB interface;
S22. Hold the depth camera at a preset distance from the object and move it slowly at a uniform speed to scan the scene and obtain the data stream;
S23. Smooth and denoise the data with a bilateral filtering algorithm.
Further, the preset distance is 0.4-3.5 m.
In a preferred embodiment of the present invention, step S3 comprises:
S31. Perspective-project each pixel u of the denoised two-dimensional image into camera space to obtain the corresponding vertex v(u);
S32. Copy each frame's depth image to the GPU and use the GPU to compute the corresponding global-space vertex map v_{g,i}(x, y);
S33. After obtaining the world coordinates of the new depth map, the system computes the normal vector of each vertex in parallel on the GPU.
Further, the normal-vector computation in step S33 comprises:
S331. According to the structure of the Kinect point-cloud data, select the 4 neighbors of point V_i(x, y), namely V_i(x+1, y), V_i(x-1, y), V_i(x, y+1), V_i(x, y-1);
S332. Compute the barycenter of the neighborhood of V_i(x, y);
S333. Analyze the point V_i and its neighbors to obtain the third-order covariance matrix C;
S334. Perform an eigendecomposition of the covariance matrix C;
S335. The eigenvector v_1 corresponding to eigenvalue λ_1 is the normal vector of the tangent plane at point V_i.
In a preferred embodiment of the present invention, step S4 comprises:
S41. Using the coordinate transform of step S3, unify the point clouds of consecutive frames into the world coordinate system, search the overlapping region between the two consecutive frames' point clouds, and select point pairs whose depth difference and normal-vector angle meet the threshold conditions as matching point pairs;
S42. From the corresponding point pairs, solve the optimal transformation parameters T_opt with an ICP algorithm that minimizes a point-to-plane distance objective function;
S43. From T_opt and the current-frame camera pose estimate T_i of the reference point cloud, the current camera's position and orientation can be obtained.
Further, the depth-difference and normal-vector thresholds are 0.25 and 20°, respectively.
Further, the process by which step S42 solves the optimal transformation parameters T_opt comprises:
S421. Assign each matching point pair a corresponding certainty factor w_i, where Φ_d(x) is a kernel function with distance threshold δ_d, and Φ_n(x) is a kernel function with angle threshold δ_n;
S422. Redefine the objective function;
S423. Solve the objective function with linear least squares.
In a preferred embodiment of the present invention, step S5 comprises:
S51. Allocate two volumes of resolution 512³ in GPU memory, used respectively to store the geometric information and the color information of the point-cloud space;
S52. Find the pixel u on the depth image corresponding to the center of voxel v through the projection equation π;
S53. Use the coordinate conversion to find the world coordinates p_g of the three-dimensional point corresponding to pixel u on the depth image;
S54. Compute the signed distance sdf_i between the voxel v in the point-cloud volume and the corresponding point p_g in three-dimensional space;
S55. Truncate the signed distance sdf_i into a defined value range;
S56. Find the zero-crossing points where sdf_i = 0 to determine the position of the model surface, and record color information at the corresponding position in the color volume;
S57. Update the TSDF value f_i(v)_avg and the weight coefficient w_i(v), with
w_i = min(Max_Weight, w_{i-1} + 1).
The beneficial effects of the invention are as follows:
(1) The invention scans a scene with a single depth camera, obtains the scene's depth data stream, processes the data in real time on a computer, and generates the corresponding three-dimensional model; the method provides a low-cost, easy-to-operate, high-quality, real-time three-dimensional reconstruction system that generates high-quality, highly visualizable three-dimensional models;
(2) The invention optimizes the computation of vertex normals, providing more accurate normals for subsequent computation;
(3) During point-cloud registration the invention assigns each matching point pair a different certainty factor, so that different matching points are treated differently, improving the stability of camera pose tracking;
(4) The invention optimizes the TSDF weights by assigning weight 1 to the current TSDF value, ensuring that the value stored in a voxel is updated only when significant depth data are received, and that the result after point-cloud fusion is smoother and more accurate.
Description of the drawings
Fig. 1 is a flow chart of a preferred embodiment of the real-time three-dimensional scene reconstruction method based on a depth camera according to the present invention;
Fig. 2 is a schematic diagram of the depth-image conversion process;
Fig. 3 is a schematic diagram of the projective data association.
Detailed description of the embodiments
The preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the protection scope of the present invention can be defined more clearly.
Referring to Fig. 1, an embodiment of the present invention comprises:
A real-time three-dimensional scene reconstruction method based on a depth camera, comprising the following steps:
S1. According to experimental requirements, compile function libraries compatible with the hardware system and set the system environment variables. Taking the installation of a Kinect depth camera as an example, the steps are as follows:
S11. First install the Kinect device driver, and on this basis build the data acquisition module based on Kinect + OpenNI;
S12. Compile the VTK and OpenCV open-source code with CMake to obtain independent third-party function libraries compatible with the experimental system, and configure the corresponding system environment variables:
1. Use CMake to generate the VTK.sln and OpenCV.sln solution files;
2. Open the VTK.sln and OpenCV.sln project files in VS2010 and build the solutions under both the Debug and Release configurations;
3. Set the paths of the function libraries in the system environment variables;
S13. Install the Eigen and CUDA libraries and configure the corresponding system environment variables.
S2. Move the Kinect depth camera to measure the same scene from different viewpoints, obtain multiple frames of continuous depth data, and denoise the depth data. The steps are as follows:
S21. Connect the Kinect depth camera to the computer through a USB interface;
S22. The user holds the Kinect depth camera, keeps a distance of 0.4 m to 3.5 m from the object, and moves the Kinect slowly at a uniform speed to scan the scene and obtain the data stream;
S23. Smooth and denoise the data with a bilateral filtering algorithm:
1. Search the neighborhood of pixel (i, j);
2. Compute the image gray-level domain weight w_r and the spatial domain weight w_s:
w_s(i, j, k, l) = exp(-((i - k)² + (j - l)²) / (2σ_s²))
w_r(i, j, k, l) = exp(-(I(i, j) - I(k, l))² / (2σ_r²))
where σ_s is the weight coefficient of the Gaussian filter and σ_r is the gray-level difference weight coefficient; together these two coefficients determine the performance of the bilateral filter.
3. Compute the filtered depth pixel value:
I'(i, j) = (1 / w_p) Σ_{(k,l)∈Ω} w(i, j, k, l) I(k, l), with w(i, j, k, l) = w_s(i, j, k, l) · w_r(i, j, k, l) and w_p = Σ_{(k,l)∈Ω} w(i, j, k, l)
where I is the raw noisy image, I' is the image after smoothing and denoising, Ω is the rectangular region centered at pixel (i, j), w(i, j, k, l) is the weight of the filter at pixel (k, l), and w_p is the normalization parameter.
S3. Transform the denoised depth images from their different coordinate systems into a unified world coordinate system to obtain a three-dimensional point-cloud model. As shown in Fig. 2, the steps are as follows:
S31. Perspective-project each pixel u of the denoised two-dimensional image into camera space to obtain the vertex v(u) corresponding to the depth image:
v(u) = D(u) K⁻¹ [u, 1]ᵀ
where D(u) is the depth value at pixel u, K is the projection matrix, (c_x, c_y) is the coordinate of the image center, and f_x and f_y are the focal lengths of the depth camera.
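The back-projection v(u) = D(u) K⁻¹ [u, 1]ᵀ of S31 can be sketched directly. The intrinsics below (f_x, f_y, c_x, c_y) are assumed example values, not calibration constants from the patent.

```python
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def back_project(u, v, depth):
    """Map pixel (u, v) with depth D(u) to a 3-D camera-space vertex."""
    pix = np.array([u, v, 1.0])
    return depth * (np.linalg.inv(K) @ pix)

# The principal point back-projects onto the optical axis.
vertex = back_project(cx, cy, 2.0)
```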
S32. Copy each frame's depth image to the GPU (Graphics Processing Unit) and use the GPU's parallel computing power to obtain the corresponding three-dimensional vertex map v_{g,i}(x, y) quickly:
v_{g,i}(x, y) = T_i v_i(x, y)
where v_i(x, y) is the camera-space vertex of S31 in homogeneous form and T_i is the rigid-body transformation matrix at time i, initialized from the camera pose T_{i-1} of the previous frame; because the motion between two consecutive frames during Kinect movement is small, the current frame's camera pose can be regarded as the previous frame's pose after a small change. The rigid-body transformation matrix describes the 6DOF (six-degree-of-freedom) camera pose in the world coordinate system, and T_i is defined as:
T_i = [ R_i  t_i ; 0ᵀ  1 ]
where R_i is a 3 × 3 rotation matrix and t_i is a translation vector.
S33. After obtaining the world coordinates of the new depth map, the system computes the normal vector of each vertex in parallel on the GPU. The steps are as follows:
1. According to the structure of the Kinect point-cloud data, find the k = 4 neighbors of point V_i(x, y), namely V_i(x+1, y), V_i(x-1, y), V_i(x, y+1), V_i(x, y-1), and compute the barycenter V̄ of the neighborhood of V_i(x, y):
V̄ = (1/k) Σ_j V_j
2. Analyze the point V_i and its neighbors to obtain the third-order covariance matrix C:
C = Σ_j (V_j - V̄)(V_j - V̄)ᵀ
3. Perform an eigendecomposition of the covariance matrix C;
4. The eigenvector v_1 corresponding to eigenvalue λ_1 is the normal vector of the tangent plane at point V_i.
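The covariance-based normal estimation of S33 can be sketched as follows; as is standard for this technique, the normal is taken to be the eigenvector of C belonging to the smallest eigenvalue (the direction of least variance).

```python
import numpy as np

def estimate_normal(point, neighbors):
    """Estimate the surface normal at `point` from its neighborhood by PCA:
    eigendecompose the 3x3 covariance of the local point set and return the
    eigenvector of the smallest eigenvalue."""
    pts = np.vstack([point] + list(neighbors))
    centroid = pts.mean(axis=0)                 # barycenter of the neighborhood
    d = pts - centroid
    C = d.T @ d / len(pts)                      # third-order covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)        # eigenvalues in ascending order
    return eigvecs[:, 0]                        # least-variance direction

# Four neighbors lying in the z = 0 plane: the normal must be +/- z.
p = np.array([0.0, 0.0, 0.0])
nbrs = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]), np.array([0.0, -1.0, 0.0])]
n = estimate_normal(p, nbrs)
```

The sign of the returned vector is arbitrary; in practice it would be flipped to face the camera.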
S4. Compute a six-degree-of-freedom rigid-body transformation matrix to align the new point-cloud data with the existing rendered model, output the relative transformation parameters between two consecutive frames, and use them to initialize the camera pose for the next frame's alignment. The steps are as follows:
S41. Using the coordinate transform of step S3, unify the point clouds of consecutive frames into the world coordinate system and search the overlapping region between the two consecutive frames' point clouds to determine the matching point pairs. The process is as follows:
1. Use the projective data association method to project the source point cloud s_i and the target point cloud d_i under the world coordinate system onto the camera imaging plane, as shown in Fig. 3;
2. Compute the depth difference Δd and the normal-vector angle Δθ between candidate matching point pairs:
Δd = ||s_i − d_i||
Δθ = arccos(n_{s_i} · n_{d_i})
3. Select the point pairs that project to the same position and whose depth difference and normal-vector angle both satisfy the threshold conditions as correct matching point pairs. The decision conditions are:
||s_i − d_i|| < ε and Δθ < θ
where ε = 0.25 and θ = 20°. The larger the Euclidean distance and the normal-vector angle between corresponding points s_i and d_i, the larger the probability that the pair is a false match, so a pair exceeding the thresholds is regarded as a false matching point pair.
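The pairwise acceptance test of S41 can be sketched as follows, using the thresholds ε = 0.25 and θ = 20° given in the text (unit normals are assumed for the angle test).

```python
import numpy as np

EPS = 0.25                    # depth-difference threshold from the text
THETA = np.deg2rad(20.0)      # normal-angle threshold from the text

def is_match(s, d, n_s, n_d):
    """Accept a candidate pair only if both the Euclidean distance and the
    angle between normals stay under the thresholds."""
    dist_ok = np.linalg.norm(s - d) < EPS
    cos_angle = np.dot(n_s, n_d) / (np.linalg.norm(n_s) * np.linalg.norm(n_d))
    angle_ok = np.arccos(np.clip(cos_angle, -1.0, 1.0)) < THETA
    return bool(dist_ok and angle_ok)

good = is_match(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.1]),
                np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
bad = is_match(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.5]),
               np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
```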
S42. From the corresponding point pairs, solve the optimal transformation parameters T_opt with an ICP algorithm that minimizes a point-to-plane distance objective function, as follows:
1. Assign each matching point pair a corresponding certainty factor w_i:
w_i = Φ_d(||s_i − d_i||) · Φ_n(Δθ)
where Φ_d(x) is a kernel function with distance threshold δ_d, and Φ_n(x) is a kernel function with angle threshold δ_n. The larger the certainty factor, the higher the reliability of the matching point; using matching points with large certainty factors improves the accuracy and stability of camera pose tracking.
2. Using the point-to-plane distance as the basis of the objective function, redefine the objective function:
E(T) = Σ_i w_i ((T s_i − d_i) · n_{d_i})²
3. Solve the objective function with linear least squares.
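The weighting idea of S42 can be sketched as below. The exact kernel definitions are not reproduced in this text, so Gaussian fall-offs parameterized by δ_d and δ_n are assumed here purely for illustration; the weighted point-to-plane residual matches the redefined objective.

```python
import numpy as np

delta_d = 0.25                 # distance threshold (used as an assumed kernel scale)
delta_n = np.deg2rad(20.0)     # angle threshold (used as an assumed kernel scale)

def certainty(s, d, n_s, n_d):
    """w_i = Phi_d(||s - d||) * Phi_n(angle), with assumed Gaussian kernels
    and unit normals."""
    phi_d = np.exp(-np.linalg.norm(s - d) ** 2 / (2 * delta_d ** 2))
    cosang = np.clip(np.dot(n_s, n_d), -1.0, 1.0)
    phi_n = np.exp(-np.arccos(cosang) ** 2 / (2 * delta_n ** 2))
    return phi_d * phi_n

def weighted_point_to_plane_error(pairs):
    """Objective of S422: sum_i w_i * ((s_i - d_i) . n_i)^2 for fixed T."""
    return sum(w * float(np.dot(s - d, n)) ** 2 for s, d, n, w in pairs)

s, d = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.05])
n = np.array([0.0, 0.0, 1.0])
w = certainty(s, d, n, n)
err = weighted_point_to_plane_error([(s, d, n, w)])
```

In a full solver this error would be linearized around the current pose and minimized over the six transformation parameters by linear least squares, as stated in S423.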
S43. From T_opt and the initial camera pose T_i of the reference point cloud, the current camera's position and orientation can be obtained.
S5. After registration, fuse the repeated point-cloud data in overlapping regions, simplify the point cloud, and fuse color information, as follows:
S51. Allocate two volumes of resolution 512³ in GPU memory, used respectively to store the geometric information and the color information of the point-cloud space:
1. Each voxel of the point-cloud volume stores geometric information, represented by the composite variable Φ(v) = (f(v), w(v)), where f(v) is the truncated signed distance function (TSDF) value measuring the relative distance from the voxel to the actual scene surface. A positive TSDF value indicates that the voxel lies outside the model surface; a negative TSDF value indicates that it lies inside the model surface; the zero crossing at f(v) = 0 is where the model surface actually lies, i.e., it corresponds to a point of the model surface. w(v) is the weight coefficient of the value f(v) in voxel v;
2. The color volume stores the color data C_n(p), where p ∈ N³ is the voxel's three-dimensional coordinate in the volume and C_n(p) denotes the color data accumulated from frames 1 to n.
S52. In the point-cloud volume, find the pixel u on the depth image corresponding to the center of voxel v through the projection equation π;
S53. Use the coordinate conversion to find the world coordinates p_g of the three-dimensional point corresponding to pixel u on the depth image;
S54. Compute the signed distance sdf_i between the voxel v in the point-cloud volume and the corresponding point p_g in three-dimensional space:
sdf_i = ||v − t_i|| − ||p_g − t_i||
S55. Truncate the signed distance sdf_i into a defined value range to obtain its corresponding TSDF value Δf:
Δf = sgn(sdf_i) · min(1, |sdf_i| / thr)
where thr denotes the preset truncation maximum, thr = 0.03.
S56. Find the zero-crossing points of Δf = 0 to determine the position of the model surface, and record color information at the corresponding position in the color volume;
S57. The depth camera constantly generates new point-cloud data while moving, and the TSDF value f_i(v)_avg and weight coefficient w_i(v) of every spatial voxel under the depth camera's current view need to be updated:
f_i(v)_avg = (w_{i-1}(v) f_{i-1}(v)_avg + f_i(v)) / (w_{i-1}(v) + 1)
w_i = min(Max_Weight, w_{i-1} + 1)
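Steps S54-S57 can be sketched as a per-voxel update. The truncation bound thr = 0.03 is taken from the text; Max_Weight is an assumed cap, and the update gives the new sample weight 1, as the invention specifies.

```python
import numpy as np

THR = 0.03          # truncation bound from the text
MAX_WEIGHT = 128    # assumed weight cap

def truncate(sdf, thr=THR):
    """Clamp the signed distance into [-thr, thr], normalized to [-1, 1]."""
    return float(np.clip(sdf / thr, -1.0, 1.0))

def update_voxel(f_prev, w_prev, f_new):
    """Fuse a new TSDF sample (weight 1) into a voxel's stored (f, w) pair
    as a running weighted average."""
    w_new = min(MAX_WEIGHT, w_prev + 1)
    f_avg = (w_prev * f_prev + f_new) / (w_prev + 1)
    return f_avg, w_new

# Voxel at distance 1.0 from the camera, observed surface point at 1.02:
# the voxel sits 0.02 in front of the surface, i.e. sdf = -0.02.
sdf = np.linalg.norm(np.array([0.0, 0.0, 1.0])) - np.linalg.norm(np.array([0.0, 0.0, 1.02]))
f = truncate(sdf)                     # -0.02 / 0.03
f_avg, w = update_voxel(0.0, 1, f)    # fuse into a voxel storing (0.0, 1)
```

Because the new sample always enters with weight 1, a stationary camera repeatedly observing the same surface cannot let noise dominate the stored value, which is the stability improvement the invention claims over KinectFusion's equal-weight scheme.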
S6. Render the scene in real time to guide the user in planning the depth camera's trajectory.
Because of the influence of object surface material and depth-camera performance, the depth values acquired for the same region of the scene from different viewpoints carry different errors; weighting the depth data during point-cloud fusion effectively eliminates the influence of these errors, so the TSDF values stored in the three-dimensional voxel model after point-cloud fusion become smoother and more accurate.
S7. Triangulate the point-cloud model with the MC (Marching Cubes) algorithm to generate a visualizable three-dimensional model.
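The MC step of S7 extracts the f(v) = 0 isosurface from the TSDF volume. A full Marching Cubes lookup table is beyond a sketch, but its core per-edge operation, not spelled out in the text, is linear interpolation of the zero crossing between two voxel corners with TSDF values of opposite sign:

```python
import numpy as np

def edge_zero_crossing(p0, p1, f0, f1):
    """Interpolate where the TSDF changes sign along the edge p0 -> p1.
    f0 and f1 must have opposite signs."""
    t = f0 / (f0 - f1)
    return p0 + t * (p1 - p0)

# TSDF +0.5 at one corner and -0.5 at the next: the surface crosses halfway.
p = edge_zero_crossing(np.array([0.0, 0.0, 0.0]),
                       np.array([1.0, 0.0, 0.0]), 0.5, -0.5)
```

Marching Cubes emits one such interpolated vertex per sign-changing cube edge and connects them into triangles according to the cube's sign configuration.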
The above is only an embodiment of the present invention and is not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.
Claims (10)
1. A real-time three-dimensional scene reconstruction method based on a depth camera, comprising the following steps:
S1. According to experimental requirements, compile function libraries compatible with the depth camera and set the experimental system's environment variables;
S2. Move the depth camera to measure the same scene from different viewpoints, obtain multiple frames of continuous depth data, and denoise the depth data;
S3. Transform the denoised depth images from their different coordinate systems into a unified world coordinate system to obtain a three-dimensional point-cloud model;
S4. Compute a six-degree-of-freedom rigid-body transformation matrix to align the current point-cloud data with the existing reference point-cloud model, output the relative transformation parameters between two consecutive frames, and use them to initialize the camera pose for the next frame's alignment;
S5. After registration, fuse the repeated point-cloud data in overlapping regions, simplify the point cloud, and fuse color information;
S6. Render the scene in real time to guide the user in planning the depth camera's trajectory;
S7. Triangulate the point-cloud model with the MC algorithm to generate a visualizable three-dimensional model.
2. The real-time three-dimensional scene reconstruction method based on a depth camera according to claim 1, characterized in that step S1 comprises:
S11. First install the depth-camera device driver, and on this basis build a data acquisition module based on the depth camera and OpenNI;
S12. Compile the VTK and OpenCV open-source code with CMake to obtain independent third-party function libraries, and configure the corresponding system environment variables;
S13. Install the Eigen and CUDA libraries and configure the corresponding system environment variables.
3. The real-time three-dimensional scene reconstruction method based on a depth camera according to claim 1, characterized in that step S2 comprises:
S21. Connect the depth camera to the computer through a USB interface;
S22. Hold the depth camera at a preset distance from the object and move it slowly at a uniform speed to scan the scene and obtain the data stream;
S23. Smooth and denoise the data with a bilateral filtering algorithm.
4. The real-time three-dimensional scene reconstruction method based on a depth camera according to claim 3, characterized in that the preset distance is 0.4-3.5 m.
5. The real-time three-dimensional scene reconstruction method based on a depth camera according to claim 1, characterized in that step S3 comprises:
S31. Perspective-project each pixel u of the denoised two-dimensional image into camera space to obtain the corresponding vertex v(u);
S32. Copy each frame's depth image to the GPU and use the GPU to compute the corresponding global-space vertex map v_{g,i}(x, y);
S33. After obtaining the world coordinates of the new depth map, the system computes the normal vector of each vertex in parallel on the GPU.
6. The real-time three-dimensional scene reconstruction method based on a depth camera according to claim 5, characterized in that the normal-vector computation in step S33 comprises:
S331. According to the structure of the Kinect point-cloud data, select the 4 neighbors of point V_i(x, y), namely V_i(x+1, y), V_i(x-1, y), V_i(x, y+1), V_i(x, y-1);
S332. Compute the barycenter of the neighborhood of V_i(x, y);
S333. Analyze the point V_i and its neighbors to obtain the third-order covariance matrix C;
S334. Perform an eigendecomposition of the covariance matrix C;
S335. The eigenvector v_1 corresponding to eigenvalue λ_1 is the normal vector of the tangent plane at point V_i.
7. The real-time three-dimensional scene reconstruction method based on a depth camera according to claim 1, characterized in that step S4 comprises:
S41. Using the coordinate transform of step S3, unify the point clouds of consecutive frames into the world coordinate system, search the overlapping region between the two consecutive frames' point clouds, and select point pairs whose depth difference and normal-vector angle meet the threshold conditions as matching point pairs;
S42. From the corresponding point pairs, solve the optimal transformation parameters T_opt with an ICP algorithm that minimizes a point-to-plane distance objective function;
S43. From T_opt and the current-frame camera pose estimate T_i of the reference point cloud, the current camera's position and orientation can be obtained.
8. The real-time three-dimensional scene reconstruction method based on a depth camera according to claim 7, characterized in that the depth-difference and normal-vector thresholds are 0.25 and 20°, respectively.
9. The real-time three-dimensional scene reconstruction method based on a depth camera according to claim 7, characterized in that the process by which step S42 solves the optimal transformation parameters T_opt comprises:
S421. Assign each matching point pair a corresponding certainty factor w_i, where Φ_d(x) is a kernel function with distance threshold δ_d, and Φ_n(x) is a kernel function with angle threshold δ_n;
S422. Redefine the objective function;
S423. Solve the objective function with linear least squares.
10. The three-dimensional scene real-time reconstruction method based on a depth camera according to claim 1, wherein step S5 comprises the following steps:
S51: allocating two volumes of resolution 512³ in GPU memory, used to store the spatial geometry information and the color information of the point cloud, respectively;
S52: finding the pixel u on the depth image corresponding to the center of voxel v through the projection equation π;
S53: computing, by coordinate conversion, the world coordinate pg of the point in three-dimensional space corresponding to pixel u on the depth image;
S54: computing the signed distance sdfi between voxel v in the point-cloud volume and its corresponding point pg in three-dimensional space;
S55: truncating sdfi into a defined value range;
S56: finding the zero-crossing point where sdfi = 0 to determine the position of the model surface, while recording the color information at the corresponding position in the color volume;
S57: updating the TSDF value fi(v)avg and the weight coefficient wi(v), with
wi = min(Max_Weight, wi-1 + 1).
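The per-voxel update of steps S55 and S57 can be sketched as a running weighted average of truncated signed distances. This is a sketch of the standard TSDF fusion rule, assuming a unit observation weight; the truncation band `trunc` and the function name `update_tsdf` are illustrative choices, and only the weight clamp wi = min(Max_Weight, wi-1 + 1) comes from the claim itself.

```python
import numpy as np

def update_tsdf(tsdf_avg, weight, sdf, trunc=0.1, max_weight=128):
    """Fuse one signed-distance observation into a voxel.
    S55: clamp the signed distance into [-trunc, trunc];
    S57: running average of TSDF values, weight capped at max_weight."""
    sdf_t = float(np.clip(sdf, -trunc, trunc))            # truncation (S55)
    tsdf_new = (tsdf_avg * weight + sdf_t) / (weight + 1)  # running average
    w_new = min(max_weight, weight + 1)                    # weight update (S57)
    return tsdf_new, w_new

t1, w1 = update_tsdf(0.0, 0, 0.05)   # first observation of an empty voxel
t2, w2 = update_tsdf(t1, w1, 0.2)    # 0.2 is clamped to the 0.1 band first
```

The capped weight makes the average behave like an exponential moving average once Max_Weight is reached, which lets the model adapt to slow scene changes.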
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810380432.8A CN108765548A (en) | 2018-04-25 | 2018-04-25 | Three-dimensional scenic real-time reconstruction method based on depth camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108765548A true CN108765548A (en) | 2018-11-06 |
Family
ID=64011781
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810380432.8A Pending CN108765548A (en) | 2018-04-25 | 2018-04-25 | Three-dimensional scenic real-time reconstruction method based on depth camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108765548A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279987A (en) * | 2013-06-18 | 2013-09-04 | 厦门理工学院 | Object fast three-dimensional modeling method based on Kinect |
US20160321838A1 (en) * | 2015-04-29 | 2016-11-03 | Stmicroelectronics S.R.L. | System for processing a three-dimensional (3d) image and related methods using an icp algorithm |
CN106803267A (en) * | 2017-01-10 | 2017-06-06 | 西安电子科技大学 | Indoor scene three-dimensional rebuilding method based on Kinect |
Non-Patent Citations (3)
Title |
---|
丹熙方: "Real-Time 3D Reconstruction of Indoor Scenes Based on Kinect", China Master's Theses Full-text Database, Information Science and Technology series * |
叶日藏: "Applied Research on 3D Reconstruction Technology Based on the Kinect Depth Sensor", China Master's Theses Full-text Database, Information Science and Technology series * |
宋立鹏: "Segmentation and Classification of 3D Point Cloud Data of Outdoor Scenes", Dalian University of Technology * |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109658448A (en) * | 2018-11-29 | 2019-04-19 | 武汉中地地科传媒文化有限责任公司 | A kind of product introduction method and system based on body feeling interaction |
CN109741382A (en) * | 2018-12-21 | 2019-05-10 | 西安科技大学 | A kind of real-time three-dimensional method for reconstructing and system based on Kinect V2 |
CN109756660A (en) * | 2019-01-04 | 2019-05-14 | Oppo广东移动通信有限公司 | Electronic equipment and mobile platform |
CN109756660B (en) * | 2019-01-04 | 2021-07-23 | Oppo广东移动通信有限公司 | Electronic equipment and mobile platform |
CN109669219A (en) * | 2019-01-16 | 2019-04-23 | 武汉市工程科学技术研究院 | Tunnel prediction method based on three-dimensional reconstruction |
CN109816765A (en) * | 2019-02-11 | 2019-05-28 | 清华-伯克利深圳学院筹备办公室 | Texture towards dynamic scene determines method, apparatus, equipment and medium in real time |
CN109816765B (en) * | 2019-02-11 | 2023-06-27 | 清华-伯克利深圳学院筹备办公室 | Method, device, equipment and medium for determining textures of dynamic scene in real time |
CN110120093A (en) * | 2019-03-25 | 2019-08-13 | 深圳大学 | Three-dimensional plotting method and system in a kind of room RGB-D of diverse characteristics hybrid optimization |
CN110096144A (en) * | 2019-04-08 | 2019-08-06 | 汕头大学 | A kind of interaction holographic projection methods and system based on three-dimensional reconstruction |
CN110096144B (en) * | 2019-04-08 | 2022-11-15 | 汕头大学 | Interactive holographic projection method and system based on three-dimensional reconstruction |
CN111060006A (en) * | 2019-04-15 | 2020-04-24 | 深圳市易尚展示股份有限公司 | Viewpoint planning method based on three-dimensional model |
CN110211212A (en) * | 2019-06-05 | 2019-09-06 | 西北工业大学 | A kind of electromagnet data interaction formula visual analysis method based on VTK |
CN110458939A (en) * | 2019-07-24 | 2019-11-15 | 大连理工大学 | The indoor scene modeling method generated based on visual angle |
CN110443842A (en) * | 2019-07-24 | 2019-11-12 | 大连理工大学 | Depth map prediction technique based on visual angle fusion |
CN110458939B (en) * | 2019-07-24 | 2022-11-18 | 大连理工大学 | Indoor scene modeling method based on visual angle generation |
CN110633628B (en) * | 2019-08-02 | 2022-05-06 | 杭州电子科技大学 | RGB image scene three-dimensional model reconstruction method based on artificial neural network |
CN110633628A (en) * | 2019-08-02 | 2019-12-31 | 杭州电子科技大学 | RGB image scene three-dimensional model reconstruction method based on artificial neural network |
CN110706332A (en) * | 2019-09-25 | 2020-01-17 | 北京计算机技术及应用研究所 | Scene reconstruction method based on noise point cloud |
CN110706332B (en) * | 2019-09-25 | 2022-05-17 | 北京计算机技术及应用研究所 | Scene reconstruction method based on noise point cloud |
CN110853135A (en) * | 2019-10-31 | 2020-02-28 | 天津大学 | Indoor scene real-time reconstruction tracking service method based on endowment robot |
CN111223180A (en) * | 2020-01-08 | 2020-06-02 | 中冶赛迪重庆信息技术有限公司 | Three-dimensional modeling method and device for stock ground, storage medium and electronic terminal |
CN111476907A (en) * | 2020-04-14 | 2020-07-31 | 青岛小鸟看看科技有限公司 | Positioning and three-dimensional scene reconstruction device and method based on virtual reality technology |
CN111626929B (en) * | 2020-04-28 | 2023-08-08 | Oppo广东移动通信有限公司 | Depth image generation method and device, computer readable medium and electronic equipment |
CN111626929A (en) * | 2020-04-28 | 2020-09-04 | Oppo广东移动通信有限公司 | Depth image generation method and device, computer readable medium and electronic equipment |
CN111968238A (en) * | 2020-08-22 | 2020-11-20 | 晋江市博感电子科技有限公司 | Human body color three-dimensional reconstruction method based on dynamic fusion algorithm |
WO2022040970A1 (en) * | 2020-08-26 | 2022-03-03 | 南京翱翔信息物理融合创新研究院有限公司 | Method, system, and device for synchronously performing three-dimensional reconstruction and ar virtual-real registration |
CN114444158A (en) * | 2020-11-04 | 2022-05-06 | 北京瓦特曼科技有限公司 | Underground roadway deformation early warning method and system based on three-dimensional reconstruction |
CN113192206B (en) * | 2021-04-28 | 2023-04-07 | 华南理工大学 | Three-dimensional model real-time reconstruction method and device based on target detection and background removal |
CN113192206A (en) * | 2021-04-28 | 2021-07-30 | 华南理工大学 | Three-dimensional model real-time reconstruction method and device based on target detection and background removal |
CN113256789A (en) * | 2021-05-13 | 2021-08-13 | 中国民航大学 | Three-dimensional real-time human body posture reconstruction method |
CN113643346A (en) * | 2021-07-28 | 2021-11-12 | 杭州易现先进科技有限公司 | Scene reconstruction method and scanning device |
CN113643436A (en) * | 2021-08-24 | 2021-11-12 | 凌云光技术股份有限公司 | Depth data splicing and fusing method and device |
CN113643436B (en) * | 2021-08-24 | 2024-04-05 | 凌云光技术股份有限公司 | Depth data splicing and fusion method and device |
CN113902846A (en) * | 2021-10-11 | 2022-01-07 | 岱悟智能科技(上海)有限公司 | Indoor three-dimensional modeling method based on monocular depth camera and mileage sensor |
CN113902846B (en) * | 2021-10-11 | 2024-04-12 | 岱悟智能科技(上海)有限公司 | Indoor three-dimensional modeling method based on monocular depth camera and mileage sensor |
CN117994444A (en) * | 2024-04-03 | 2024-05-07 | 浙江华创视讯科技有限公司 | Reconstruction method, device and storage medium of complex scene |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108765548A (en) | Three-dimensional scenic real-time reconstruction method based on depth camera | |
Pusztai et al. | Accurate calibration of LiDAR-camera systems using ordinary boxes | |
Zhou et al. | Canny-vo: Visual odometry with rgb-d cameras based on geometric 3-d–2-d edge alignment | |
CN103988226B (en) | Method for estimating camera motion and for determining real border threedimensional model | |
CN102572505B (en) | System and method for calibrating a depth imaging sensor | |
JP4245963B2 (en) | Method and system for calibrating multiple cameras using a calibration object | |
CN103971404B (en) | 3D real-scene copying device having high cost performance | |
Yu et al. | Extracting objects from range and radiance images | |
CN105869160A (en) | Method and system for implementing 3D modeling and holographic display by using Kinect | |
US10755433B2 (en) | Method and system for scanning an object using an RGB-D sensor | |
CN110478892A (en) | A kind of method and system of three-dimension interaction | |
Pan et al. | Dense 3D reconstruction combining depth and RGB information | |
CN108603936A (en) | Laser scanning system, Laser Scanning, mobile laser scanning system and program | |
Zhu et al. | Video-based outdoor human reconstruction | |
Andreasson et al. | 6D scan registration using depth-interpolated local image features | |
Li et al. | Research on the calibration technology of an underwater camera based on equivalent focal length | |
Takai et al. | Difference sphere: an approach to near light source estimation | |
CN109613974A (en) | A kind of AR household experiential method under large scene | |
CN117036612A (en) | Three-dimensional reconstruction method based on nerve radiation field | |
CN114140539A (en) | Method and device for acquiring position of indoor object | |
Saval-Calvo et al. | μ-MAR: multiplane 3D marker based registration for depth-sensing cameras | |
CN109345570B (en) | Multi-channel three-dimensional color point cloud registration method based on geometric shape | |
Radanovic et al. | Aligning the real and the virtual world: Mixed reality localisation using learning-based 3D–3D model registration | |
Niu et al. | The line scan camera calibration based on space rings group | |
Malleson et al. | Single-view RGBD-based reconstruction of dynamic human geometry |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20181106 |