CN103247075A - Variational mechanism-based indoor scene three-dimensional reconstruction method - Google Patents


Info

Publication number: CN103247075A (application CN201310173608.XA); granted as CN103247075B
Authority: CN (China)
Prior art keywords: camera, formula, current, algorithm, point
Inventors: 贾松敏, 王可, 李雨晨, 李秀智
Original and current assignee: Beijing University of Technology
Application filed by Beijing University of Technology
Legal status: Granted; Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the cross-disciplinary field of computer vision and intelligent robotics and discloses a variational-mechanism-based method for reconstructing large-scale indoor scenes. The method comprises the following steps: step 1, acquire the calibration parameters of a camera and build a distortion-correction model; step 2, build the camera pose description and the camera projection model; step 3, estimate the camera pose with a monocular SLAM algorithm based on SFM (Structure from Motion); step 4, build a variational depth-map estimation model and solve it; and step 5, build a keyframe selection mechanism to update the three-dimensional scene. According to the invention, an RGB camera acquires the environment data, and a variational depth-map generation method built on a high-precision monocular localization algorithm realizes fast, large-scale indoor three-dimensional scene reconstruction, effectively addressing the cost and real-time limitations of three-dimensional reconstruction algorithms.

Description

Indoor environment three-dimensional reconstruction method based on a variational mechanism
Technical field
The invention belongs to the cross-disciplinary field of computer vision and intelligent robotics and relates to indoor environment three-dimensional reconstruction technology, in particular to a method for reconstructing large-scale indoor scenes based on a variational mechanism.
Technical background
With the steady deepening of research on Simultaneous Localization and Mapping (SLAM), three-dimensional modeling of the environment has gradually become a research focus in this area and has attracted the attention of numerous scholars. In 2007, G. Klein et al. first proposed the concept of Parallel Tracking and Mapping (PTAM) in the augmented reality (AR) field to solve the problem of real-time environment modeling. PTAM splits camera tracking and map generation into two independent threads; while updating the feature points detected by the FAST corner method, it applies optimal local and global bundle adjustment (BA) to continually update the camera pose and the three-dimensional feature-point map. This method builds an environment map from a sparse point cloud, but such a map lacks an intuitive three-dimensional description of the environment. Pollefeys et al. achieved three-dimensional reconstruction of large-scale outdoor scenes through multi-sensor fusion, but that method suffers from high computational complexity and sensitivity to noise. Some exploratory progress has also been made in real-time tracking and dense environment-model reconstruction, but it remains confined to the reconstruction of simple objects and attains high precision only under particular constraints. Richard A. Newcombe et al. use an SLAM algorithm based on SFM (Structure from Motion) to obtain a sparse spatial feature point cloud, apply multi-scale radial-basis interpolation with the implicit-surface polygonization method of computer graphics to construct an initial three-dimensional mesh map, and update the mesh vertex coordinates by combining a scene-flow constraint with a high-precision TV-L1 optical-flow algorithm, so as to approach the real scene. That algorithm obtains a high-precision environment model, but its complexity is such that, even accelerated by two graphics processors (GPUs), processing a single frame still takes several seconds.
Summary of the invention
In view of the above problems in the prior art, the invention provides a fast three-dimensional reconstruction method based on a variational mechanism to realize three-dimensional modeling in complex indoor environments. The method reduces the amount of data to be processed while preserving the environment information, enables fast large-scale indoor three-dimensional scene reconstruction, effectively addresses the cost and real-time limitations of three-dimensional reconstruction algorithms, and improves reconstruction precision.
The technical solution used in the present invention is as follows:
The PTAM algorithm serves as the camera pose estimation means; at each keyframe a suitable image sequence is chosen to construct a variational depth-map estimation energy function, and a primal-dual algorithm optimizes this energy function to obtain the environment depth map at the current keyframe. Because the algorithm constructs the energy function from neighbouring frames, exploits the correlation between the coordinate systems of particular viewpoints, and uses the camera perspective projection relation, the data term incorporates multi-view imaging constraints, reducing the computational complexity of solving the model. Under a unified computing framework, the invention uses graphics accelerator hardware to optimize the algorithm in parallel, effectively improving its real-time performance.
A method of indoor environment three-dimensional reconstruction based on a variational mechanism, characterized by comprising the following steps:
Step 1: obtain the calibration parameters of the camera and build the distortion-correction model.
In computer vision applications, the geometric model of camera imaging establishes the mapping between pixels in the image and three-dimensional points in space. The geometric parameters of the camera model must be obtained by experiment and computation, and the process of solving for these parameters is called camera calibration. Camera calibration is a crucial link in the invention: the precision of the calibration parameters directly affects the accuracy of the final three-dimensional map.
The detailed process of camera calibration is:
(1) Print a chessboard template. An A4 sheet is used, with a chessboard spacing of 0.25 cm.
(2) Photograph the chessboard from multiple angles. The chessboard should fill the frame as far as possible while remaining fully visible, and six template pictures are taken in total, covering each angle of the chessboard.
(3) Detect the feature points in the images, i.e. the black crossing points of the chessboard.
(4) Solve for the internal parameters of the camera, as follows:
The RGB camera calibration parameters are mainly the camera intrinsics. The intrinsic matrix K of the camera is:
K = \begin{pmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{pmatrix}
In the formula, u and v are the image-plane coordinate axes, (u_0, v_0) is the image-plane centre of the camera, and (f_u, f_v) are the focal lengths of the camera.
According to the calibration parameters, the mapping between a point in the RGB image and a three-dimensional point is as follows: an image point p = (u, v) has coordinates P_{3D} = (x, y, z) in the camera frame given by

x = (u - u_0) \, z / f_u, \quad y = (v - v_0) \, z / f_v, \quad z = d

In the formula, d represents the depth value of the image point p.
In the invention the camera coordinate system, shown in Fig. 2, has its positive y-axis pointing down, positive z-axis forward, and positive x-axis to the right. The origin of the camera is set as the origin of the world coordinate system, and the X, Y, Z directions of the world coordinate system coincide with those of the camera.
The FOV (Field of View) camera correction model is:

u_d = \begin{pmatrix} u_0 \\ v_0 \end{pmatrix} + \begin{pmatrix} f_u & 0 \\ 0 & f_v \end{pmatrix} \frac{r_d}{r_u} \, x_u

r_d = \frac{1}{\omega} \arctan\!\left( 2 r_u \tan\frac{\omega}{2} \right)

r_u = \frac{\tan(r_d \, \omega)}{2 \tan\frac{\omega}{2}}

In the formula, x_u is the pixel coordinate on the z = 1 plane, u_d is the pixel coordinate in the original image, and ω is the distortion coefficient of the FOV camera.
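The pixel-to-3D mapping and the FOV radius functions of Step 1 can be sketched numerically; this is a minimal illustration, and the intrinsic values (f_u, f_v, u_0, v_0) and the distortion coefficient ω below are invented for the example, not taken from the patent.

```python
import numpy as np

def back_project(u, v, d, fu, fv, u0, v0):
    """Map pixel (u, v) with depth d to camera coordinates (x, y, z)."""
    x = (u - u0) * d / fu
    y = (v - v0) * d / fv
    return np.array([x, y, d])

def fov_undistort_radius(r_d, omega):
    """FOV model: undistorted radius r_u from distorted radius r_d."""
    return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))

# Illustrative intrinsics for a 640x480 image.
p = back_project(400.0, 300.0, 2.0, fu=500.0, fv=500.0, u0=320.0, v0=240.0)
```

Here `p` is the 3-D point whose z-coordinate equals the supplied depth, matching the `z = d` relation above.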
Step 2: build the camera pose description and the camera projection model.
Under the established world coordinate system, the camera pose can be expressed as the matrix:

T_{cw} = \begin{pmatrix} R_{cw} & t_{cw} \\ 0 & 1 \end{pmatrix}

In the formula, the subscript cw denotes the transform from the world frame to the current camera frame, with T_{cw} ∈ SE(3), where SE(3) is the space of rigid-body rotation-translation transforms. T_{cw} can be represented by the six-vector μ = (μ_1, μ_2, μ_3, μ_4, μ_5, μ_6), that is:

T_{cw} = \exp(\hat{\mu})

\hat{\mu} = \begin{pmatrix} 0 & -\mu_6 & \mu_5 & \mu_1 \\ \mu_6 & 0 & -\mu_4 & \mu_2 \\ -\mu_5 & \mu_4 & 0 & \mu_3 \\ 0 & 0 & 0 & 0 \end{pmatrix}

In the formula, μ_1, μ_2, μ_3 are the translations of the camera under the global coordinate system, and μ_4, μ_5, μ_6 are the rotations about the axes of the local coordinate system.
The camera pose T_{cw} establishes the transform between the point-cloud coordinate p_c in the current frame and the world coordinate p_w, that is:

p_c = T_{cw} \, p_w

Under the current camera frame, the projection of a three-dimensional point onto the z = 1 plane is defined as:

\pi(p) = (x/z, \; y/z)^T

In the formula, p ∈ R^3 is the three-dimensional point and x, y, z are its coordinates. Given the depth value d of the current pixel, inverse projection determines the current three-dimensional point coordinate p; the relation can be expressed as:

\pi^{-1}(u, d) = d \, K^{-1} u

where u is taken in homogeneous pixel coordinates.
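The pose parameterisation of Step 2, mapping a six-vector μ through the matrix exponential to a rigid transform in SE(3), can be sketched as follows. This is a minimal closed-form implementation (Rodrigues-style), not the patent's own code, and the test vectors are illustrative.

```python
import numpy as np

def exp_se3(mu):
    """SE(3) exponential map: six-vector (t1,t2,t3,w1,w2,w3) -> 4x4 T."""
    t = np.asarray(mu[:3], dtype=float)
    w = np.asarray(mu[3:], dtype=float)
    theta = np.linalg.norm(w)
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])       # skew-symmetric part of mu_hat
    if theta < 1e-9:                          # small-angle limit
        R = np.eye(3) + W
        V = np.eye(3) + 0.5 * W
    else:
        A = np.sin(theta) / theta
        B = (1.0 - np.cos(theta)) / theta**2
        C = (1.0 - A) / theta**2
        R = np.eye(3) + A * W + B * (W @ W)   # rotation block
        V = np.eye(3) + B * W + C * (W @ W)   # couples rotation and translation
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ t
    return T

# Pure translation: the rotation block stays the identity.
T = exp_se3([0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
p_w = np.array([1.0, 2.0, 3.0, 1.0])          # homogeneous world point
p_c = T @ p_w                                 # p_c = T_cw * p_w
```

A 90° rotation about z maps the x-axis onto the y-axis, which gives a quick sanity check of the rotation block.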
Step 3: estimate the camera pose with the SFM-based monocular SLAM algorithm.
At present, monocular SLAM algorithms fall mainly into filtering-based methods and SFM (Structure from Motion)-based methods. The invention adopts the PTAM algorithm to localize the camera. This algorithm is a monocular SLAM method based on SFM that divides the system into two independent threads, camera tracking and map building. In the camera tracking thread, the system acquires the current environment texture with the camera, builds a four-level image pyramid, extracts feature information from the current image with the FAST-10 corner detector, and establishes data association between corner features by block matching. On this basis, according to the current projection error, a pose estimation model realizes accurate localization of the camera, and the current three-dimensional point-cloud map is generated by combining the feature matches with a triangulation algorithm. The detailed process of camera pose estimation is:
(1) Initialization of the sparse map
The PTAM algorithm builds the initial environment map with a standard stereo-camera model and continually updates the three-dimensional map as new keyframes are added. During map initialization, two keyframes are selected manually; using the FAST corner matches between them, a five-point algorithm with Random Sample Consensus (RANSAC) estimates the essential matrix between the keyframes and computes the three-dimensional coordinates of the current feature points. At the same time, suitable spatial points chosen by the RANSAC algorithm establish a consensus plane, which fixes the global world coordinate system and completes the initialization of the map.
(2) Camera pose estimation
The system acquires the current environment texture with the camera, builds a four-level image pyramid, extracts feature information from the current image with the FAST-10 corner detector, and establishes data association between corner features by block matching. On this basis, according to the current projection error, the pose estimation model is established, with the following mathematical description:
\xi = \arg\min_{\xi} \sum_j \mathrm{Obj}\!\left( \frac{|e_j|}{\sigma_j}, \sigma_T \right)

e_j = \begin{pmatrix} u_i \\ v_i \end{pmatrix} - K \pi\!\left( \exp(\hat{\xi}) \, p \right)

In the formula, e_j is the reprojection error, Obj(·, σ_T) is the Tukey biweight objective function, σ_T is an unbiased estimate of the standard deviation of the feature matches, ξ is the six-vector representation of the current pose, and \hat{ξ} is the antisymmetric matrix formed from ξ.
According to the above pose estimation model, 50 feature matches located at the top level of the image pyramid are chosen to estimate the initial pose of the camera. Further, combining this initial pose, the algorithm performs an epipolar search to establish sub-pixel-precision corner matches across the image pyramid, and substitutes these matches into the pose estimation model to realize accurate relocalization of the camera.
(3) Camera pose optimization
After initialization, the map-building thread waits for new keyframes to arrive. If the number of frames between the camera and the current keyframe exceeds a threshold and camera tracking is at its best, the keyframe-adding process executes automatically. The system then performs the Shi-Tomasi assessment on all FAST corners of the new keyframe to obtain the corner features with salient characteristics, selects the nearest existing keyframe, establishes feature point correspondences by epipolar search and block matching, realizes accurate relocalization of the camera with the pose estimation model, and simultaneously projects the matched points into space to generate the current global three-dimensional map.
To maintain the global map, while the map-building thread waits for new keyframes the system applies local and global Levenberg-Marquardt bundle adjustment to realize consistent optimization of the current map. The mathematical description of the bundle adjustment is:
\left\{ \{\xi_1 \ldots \xi_N\}, \{p_1 \ldots p_M\} \right\} = \arg\min_{\{\xi\},\{p\}} \sum_{i=1}^{N} \sum_{j \in S_i} \mathrm{Obj}\!\left( \frac{|e_{ji}|}{\sigma_{ji}}, \sigma_T \right)

In the formula, σ_{ji} is the unbiased estimate of the matching standard deviation of the FAST feature points in the i-th keyframe, ξ_i is the six-vector representation of the pose of the i-th keyframe, and p_j are the points in the global map.
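The robust objective Obj used in the pose estimation and bundle adjustment of Step 3 is the Tukey biweight, which caps the influence of large reprojection errors. The patent does not give its constants, so the sketch below uses the standard form with an illustrative threshold c; it is not PTAM's actual implementation.

```python
import numpy as np

def tukey_biweight(e, c):
    """Tukey biweight rho(e): quadratic-like near 0, saturated beyond c."""
    e = np.asarray(e, dtype=float)
    inside = np.abs(e) <= c
    rho = np.full(e.shape, c * c / 6.0)            # outliers: constant cost
    r = e[inside] / c
    rho[inside] = (c * c / 6.0) * (1.0 - (1.0 - r * r) ** 3)
    return rho

errs = np.array([0.0, 1.0, 100.0])                 # small, medium, gross error
cost = tukey_biweight(errs, c=4.685)               # 4.685: common tuning value
```

The gross error contributes no more than the saturation value c²/6, which is what makes the pose estimate robust to bad matches.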
Step 4: build the depth-map estimation model based on the variational mechanism, and solve the model.
On the premise of accurate PTAM pose estimation, the invention builds a depth solving model with a variational mechanism, following a multi-view reconstruction approach. Based on the assumptions of illumination invariance and depth-map smoothness, an L1 data penalty term and a variational regularization term are established: the data penalty term follows from the illumination-invariance assumption, and the regularization term guarantees the smoothness of the current depth map. The mathematical model is as follows:

E_d = \int_{\Omega} \left( E_{data} + \lambda E_{reg} \right) dx

In the formula, λ is the weight coefficient between the data penalty term E_{data} and the variational regularization term E_{reg}, and Ω is the domain of the depth map.
Choosing the current keyframe as the reference frame I_r of the depth estimation algorithm, and using its neighbouring image sequence I = {I_1, I_2, …, I_n}, the data penalty term E_{data} is built with the projection model; its mathematical description is:

E_{data} = \frac{1}{|I(r)|} \sum_{I_i \in I} \left| I_r(x) - I_i(x') \right|

In the formula, |I(r)| is the number of frames in the current image sequence that share overlapping information with the reference frame, and x' is the projection into I_i of the reference-frame pixel x at depth d, that is:

x' = \pi\!\left( K \, T_{ri} \, \pi^{-1}(x, d) \right)
Under the depth-map smoothness assumption, and in order to preserve the discontinuities at object boundaries in the image, a weighted Huber operator is introduced to build the variational regularization term; its mathematical description is:

E_{reg} = g(u) \, \| \nabla d(u) \|_{\alpha}

In the formula, ∇d is the gradient of the depth map and g(u) is the per-pixel gradient weight coefficient, g(u) = \exp\!\left( -a \, \| \nabla I_r(u) \| \right).
The Huber operator \|x\|_α is defined as:

\| x \|_{\alpha} = \begin{cases} \dfrac{\| x \|^2}{2\alpha}, & \| x \| \le \alpha \\[4pt] \| x \| - \dfrac{\alpha}{2}, & \text{otherwise} \end{cases}

In the formula, α is a constant.
According to the Legendre-Fenchel transform, the energy can be expressed as:

g \, \| \nabla d \|_{\alpha} = \langle g \nabla d, \, q \rangle - \delta(q) - \frac{\alpha}{2} \| q \|^2

In the formula, δ(q) is the indicator function of the dual constraint:

\delta(q) = \begin{cases} 0, & \| q \| \le 1 \\ \infty, & \text{otherwise} \end{cases}
The introduction of the Huber operator provides a smoothness guarantee for the three-dimensional reconstruction process while also preserving discontinuous boundaries in the depth map, improving the quality of the created three-dimensional map.
Because the above mathematical model is complex to solve and computationally heavy, an auxiliary variable is introduced to build a convex optimization model, which is optimized by an alternating descent method. The detailed process is as follows:
(1) Fix h and solve:

\arg\max_{q} \left\{ \arg\min_{d} E_{d,q} \right\}

E_{d,q} = \int_{\Omega} \left( \langle g \nabla d, \, q \rangle + \frac{1}{2\theta} (d - h)^2 - \delta(q) - \frac{\alpha}{2} \| q \|^2 \right) dx

In the formula, θ is the coefficient of the quadratic coupling term, and g is the gradient weight coefficient of the variational regularization term.
By the Lagrangian extremum condition, the energy function reaches an extremum when:

\frac{\partial E_{d,q}}{\partial q} = g \nabla d - \alpha q = 0

\frac{\partial E_{d,q}}{\partial d} = g \, \mathrm{div}\, q + \frac{1}{\theta} (d - h) = 0

In the formula, div q is the divergence of q.
Discretizing the partial derivatives, the extremum conditions can be expressed as:

\frac{q^{n+1} - q^n}{\varepsilon_q} = g \nabla d - \alpha q^{n+1}

\frac{d^{n+1} - d^n}{\varepsilon_d} = g \, \mathrm{div}\, q + \frac{1}{\theta} (d^{n+1} - h)
The primal-dual algorithm then realizes the iterative optimization of the energy function, that is:

q^{n+1} = \frac{ \left( q^n + \varepsilon_q \, g \nabla d^n \right) / \left( 1 + \varepsilon_q \alpha \right) }{ \max\!\left( 1, \; \left\| \left( q^n + \varepsilon_q \, g \nabla d^n \right) / \left( 1 + \varepsilon_q \alpha \right) \right\| \right) }

d^{n+1} = \frac{ d^n + \varepsilon_d \left( g \, \mathrm{div}\, q^{n+1} + h^n / \theta \right) }{ 1 + \varepsilon_d / \theta }

In the formula, ε_q and ε_d are constants, the gradient step coefficients of the maximization and minimization respectively.
(2) Fix d and solve:

\arg\min_{h} E_h

E_h = \int_{\Omega} \left( \frac{\theta}{2} (d - h)^2 + \frac{\lambda}{|I(r)|} \sum_{i=0}^{n} \left| I_i(x) - I_{ref}(x, h) \right| \right) dx

To reduce the complexity of solving this energy function effectively while preserving local detail during reconstruction, the invention divides the depth range [d_{min}, d_{max}] into S sample planes and obtains the optimum of the current energy function by exhaustive search. The sample depths are chosen as:

d^k = \frac{ S \, d_{min} \, d_{max} }{ (S - k) \, d_{max} + k \, d_{min} }

In the formula, d^k is the depth of the k-th sample plane; the samples are uniform in inverse depth, so the interval between the k-th and (k-1)-th planes grows with depth.
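The primal-dual iteration of Step 4 can be sketched on a toy one-dimensional "depth map": the dual update for q with projection onto ‖q‖ ≤ 1, then the primal update for d. The step sizes, θ, α, the edge weight g, and the test signal are all illustrative values, not the patent's, and h is held fixed here rather than re-solved.

```python
import numpy as np

def grad(d):                      # forward differences, Neumann boundary
    g = np.zeros_like(d)
    g[:-1] = d[1:] - d[:-1]
    return g

def div(q):                       # discrete divergence, adjoint to -grad
    out = np.zeros_like(q)
    out[0] = q[0]
    out[1:] = q[1:] - q[:-1]
    return out

def primal_dual(h, g_w, alpha=0.01, theta=0.2, eps_q=0.5, eps_d=0.5, iters=200):
    """Alternate the dual (q) and primal (d) updates of the text."""
    d, q = h.copy(), np.zeros_like(h)
    for _ in range(iters):
        q = (q + eps_q * g_w * grad(d)) / (1.0 + eps_q * alpha)
        q = q / np.maximum(1.0, np.abs(q))          # project onto |q| <= 1
        d = (d + eps_d * (g_w * div(q) + h / theta)) / (1.0 + eps_d / theta)
    return d

h = np.array([1.0, 1.0, 5.0, 1.0, 1.0])             # auxiliary depth with a spike
d = primal_dual(h, g_w=np.ones_like(h))
```

The quadratic coupling keeps d close to h while the Huber-weighted regularizer pulls the isolated spike down, which is the smoothing behaviour the model aims for.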
Step 5: build the keyframe selection mechanism and realize the update of the three-dimensional scene.
To eliminate redundant information in the system, improve the sharpness and real-time performance of the reconstruction, and reduce the computational burden, the invention estimates the three-dimensional scene only at keyframes, and updates and maintains the generated scene. After the data of a newly added keyframe arrive, they are transformed into the world coordinate system according to the formula

p_w = T_{cw}^{-1} \, p_c

completing the update of the scene data.
Using the data penalty term of the depth model, an evaluation function of the information overlap between the current frame and the keyframe is established, that is:

N = \sum_{x \in R^2} c(x)

c(x) = \begin{cases} 1, & \left| I_r(x) - I_i(x') \right| < \zeta \\ 0, & \text{otherwise} \end{cases}

In the formula, ζ is a constant.
If N is less than 0.7 times the image size, the current frame is determined to be a new keyframe.
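The keyframe test of Step 5 can be sketched as follows: count the pixels whose photometric difference stays below ζ and declare a new keyframe when the overlap ratio N falls under 0.7. The toy 1-D "images" and the value ζ = 10 are illustrative assumptions, not values from the patent.

```python
import numpy as np

def overlap_count(I_r, I_i, zeta):
    """N = number of pixels with |I_r(x) - I_i(x')| < zeta."""
    return int(np.count_nonzero(np.abs(I_r - I_i) < zeta))

def is_new_keyframe(I_r, I_i, zeta=10.0, ratio=0.7):
    return overlap_count(I_r, I_i, zeta) < ratio * I_r.size

key = np.array([100.0, 100.0, 100.0, 100.0, 100.0])   # reference keyframe
similar = key + 1.0                                    # almost identical view
changed = np.array([100.0, 30.0, 20.0, 10.0, 0.0])     # mostly new content
```

A nearly identical frame keeps full overlap and is discarded, while a frame showing mostly new content triggers keyframe creation.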
The beneficial effects of the invention are as follows: an RGB camera acquires the environment data, and, building on a high-precision monocular localization algorithm, a depth-map generation method based on a variational mechanism is proposed, realizing fast large-scale indoor three-dimensional scene reconstruction and effectively addressing the cost and real-time limitations of three-dimensional reconstruction algorithms.
Description of drawings
Fig. 1 is the flow chart of the indoor three-dimensional scene reconstruction method based on the variational model;
Fig. 2 is a schematic diagram of the camera coordinate system;
Fig. 3 shows the three-dimensional reconstruction results of an application example of the invention.
Embodiment
Fig. 1 is the flow chart of the indoor three-dimensional scene reconstruction method based on the variational model, which comprises the following steps:
Step 1: obtain the calibration parameters of the camera and build the distortion-correction model.
Step 2: build the camera pose description and the camera projection model.
Step 3: estimate the camera pose with the SFM-based monocular SLAM algorithm.
Step 4: build the depth-map estimation model based on the variational mechanism, and solve the model.
Step 5: build the keyframe selection mechanism and realize the update of the three-dimensional scene.
Provide an application example of the present invention below.
The RGB camera used in this example is a Point Grey Flea2 with a resolution of 640 × 480, a maximum frame rate of 30 fps, a horizontal field of view of 65°, and a focal length of approximately 3.5 mm. The PC is equipped with a GTS 450 GPU and a quad-core i5 CPU.
During the experiment, the colour camera acquires the environment information, and the camera pose estimation algorithm realizes accurate self-localization. When a keyframe is created, the 20 frames around it are selected as the input of the depth estimation algorithm described here. In the implementation of the depth estimation algorithm, set d_0 = h_0 and q_0 = 0, compute the initial estimate of the current depth map as input, and iteratively optimize E_{d,q} and E_h until convergence. Meanwhile, the value of θ should be decreased continually during the iterations, increasing the weight of the quadratic term and effectively improving the convergence speed of the algorithm. The final experimental results are shown in Fig. 3; the experiments show that this method can effectively realize dense three-dimensional reconstruction of the environment, further demonstrating its feasibility.
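The solver schedule described in the embodiment, alternating the E_{d,q} and E_h steps while continually shrinking θ, can be sketched as a simple annealing loop. The decay factor, the floor value, and the loop structure below are illustrative assumptions; the patent does not specify them.

```python
# Anneal theta so the quadratic coupling term (d - h)^2 / (2 * theta)
# tightens as the iteration proceeds, pulling d and h together.
theta, theta_min, decay = 1.0, 1e-3, 0.9
schedule = []
while theta > theta_min:
    schedule.append(theta)   # one primal-dual sweep + exhaustive h search here
    theta *= decay
```

Each recorded value stands in for one outer iteration of the alternating solver; the sequence is strictly decreasing, matching the text's requirement to keep reducing θ.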

Claims (3)

1. A method of indoor environment three-dimensional reconstruction based on a variational mechanism, characterized by comprising the following steps:
Step 1: obtain the calibration parameters of the camera and build the distortion-correction model;
The detailed process of camera calibration is:
(1) print a chessboard template;
(2) photograph the chessboard from multiple angles, letting the chessboard fill the frame as far as possible while remaining fully visible, and taking six template pictures in total, covering each angle of the chessboard;
(3) detect the feature points in the images, i.e. the black crossing points of the chessboard;
(4) solve for the internal parameters, as follows:
The RGB camera calibration parameters are mainly the camera intrinsics, and the intrinsic matrix K of the camera is:

K = \begin{pmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{pmatrix}

In the formula, u and v are the image-plane coordinate axes, (u_0, v_0) is the image-plane centre of the camera, and (f_u, f_v) are the focal lengths of the camera;
According to the calibration parameters, the mapping between a point in the RGB image and a three-dimensional point is as follows: an image point p = (u, v) has coordinates P_{3D} = (x, y, z) in the camera frame given by

x = (u - u_0) \, z / f_u, \quad y = (v - v_0) \, z / f_v, \quad z = d

In the formula, d represents the depth value of the image point p;
The camera coordinate system has its positive y-axis pointing down, positive z-axis forward, and positive x-axis to the right; the origin of the camera is set as the origin of the world coordinate system, and the X, Y, Z directions of the world coordinate system coincide with those of the camera;
The FOV camera correction model is:

u_d = \begin{pmatrix} u_0 \\ v_0 \end{pmatrix} + \begin{pmatrix} f_u & 0 \\ 0 & f_v \end{pmatrix} \frac{r_d}{r_u} \, x_u

r_d = \frac{1}{\omega} \arctan\!\left( 2 r_u \tan\frac{\omega}{2} \right)

r_u = \frac{\tan(r_d \, \omega)}{2 \tan\frac{\omega}{2}}

In the formula, x_u is the pixel coordinate on the z = 1 plane, u_d is the pixel coordinate in the original image, and ω is the distortion coefficient of the FOV camera;
Step 2: build the camera pose description and the camera projection model, as follows:
Under the established world coordinate system, the camera pose can be expressed as the matrix:

T_{cw} = \begin{pmatrix} R_{cw} & t_{cw} \\ 0 & 1 \end{pmatrix}

In the formula, the subscript cw denotes the transform from the world frame to the current camera frame, with T_{cw} ∈ SE(3), where SE(3) is the space of rigid-body rotation-translation transforms; T_{cw} can be represented by the six-vector μ = (μ_1, μ_2, μ_3, μ_4, μ_5, μ_6), that is:

T_{cw} = \exp(\hat{\mu})

\hat{\mu} = \begin{pmatrix} 0 & -\mu_6 & \mu_5 & \mu_1 \\ \mu_6 & 0 & -\mu_4 & \mu_2 \\ -\mu_5 & \mu_4 & 0 & \mu_3 \\ 0 & 0 & 0 & 0 \end{pmatrix}

In the formula, μ_1, μ_2, μ_3 are the translations of the camera under the global coordinate system, and μ_4, μ_5, μ_6 are the rotations about the axes of the local coordinate system;
The camera pose T_{cw} establishes the transform between the point-cloud coordinate p_c in the current frame and the world coordinate p_w, that is:

p_c = T_{cw} \, p_w

Under the current camera frame, the projection of a three-dimensional point onto the z = 1 plane is defined as:

\pi(p) = (x/z, \; y/z)^T

In the formula, p ∈ R^3 is the three-dimensional point and x, y, z are its coordinates; given the depth value d of the current pixel, inverse projection determines the current three-dimensional point coordinate p, the relation being expressible as:

\pi^{-1}(u, d) = d \, K^{-1} u
Step 3: estimate the camera pose with the SFM-based monocular SLAM algorithm;
Step 4: build the depth-map estimation model based on the variational mechanism, and solve the model;
Step 5: build the keyframe selection mechanism and realize the update of the three-dimensional scene, as follows:
Estimate the three-dimensional scene at keyframes, and update and maintain the generated scene; after the data of a newly added keyframe arrive, they are transformed into the world coordinate system according to the formula

p_w = T_{cw}^{-1} \, p_c

completing the update of the scene data;
Using the data penalty term of the depth model, establish an evaluation function of the information overlap between the current frame and the keyframe, that is:

N = \sum_{x \in R^2} c(x)

c(x) = \begin{cases} 1, & \left| I_r(x) - I_i(x') \right| < \zeta \\ 0, & \text{otherwise} \end{cases}

In the formula, ζ is a constant;
If N is less than 0.7 times the image size, the current frame is determined to be a new keyframe.
2. The method of indoor environment three-dimensional reconstruction based on a variational mechanism according to claim 1, characterized in that the estimation of the camera pose with the SFM-based monocular SLAM algorithm in step 3 further comprises the following steps:
(1) initialization of sparse map
The PTAM algorithm builds the initial environment map with a standard stereo-camera model and continually updates the three-dimensional map as new keyframes are added; during map initialization, two keyframes are selected manually; using the FAST corner matches between them, a five-point algorithm with Random Sample Consensus realizes the estimation of the essential matrix between the keyframes and computes the three-dimensional coordinates of the current feature points; at the same time, suitable spatial points chosen by the RANSAC algorithm establish a consensus plane, which fixes the global world coordinate system and completes the initialization of the map;
(2) the camera pose is estimated
The system acquires the current environment texture with the camera, builds a four-level image pyramid, extracts feature information from the current image with the FAST-10 corner detector, and establishes data association between corner features by block matching; on this basis, according to the current projection error, the pose estimation model is established, with the following mathematical description:

\xi = \arg\min_{\xi} \sum_j \mathrm{Obj}\!\left( \frac{|e_j|}{\sigma_j}, \sigma_T \right)

e_j = \begin{pmatrix} u_i \\ v_i \end{pmatrix} - K \pi\!\left( \exp(\hat{\xi}) \, p \right)

In the formula, e_j is the reprojection error, Obj(·, σ_T) is the Tukey biweight objective function, σ_T is an unbiased estimate of the standard deviation of the feature matches, ξ is the six-vector representation of the current pose, and \hat{ξ} is the antisymmetric matrix formed from ξ;
According to the above pose estimation model, 50 matched feature points located at the top level of the image pyramid are chosen to obtain an initial estimate of the camera pose. The algorithm then combines the initial camera pose with an epipolar search to establish sub-pixel-accurate corner matches across the image pyramid, and feeds these matches back into the pose estimation model to relocalize the camera accurately;
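The Tukey biweight objective above is typically minimized by iteratively reweighted least squares. The sketch below is a hedged stand-alone illustration on a scalar estimation problem rather than the patent's six-degree-of-freedom pose; it shows how the Tukey weights, with the scale set by the MAD-based unbiased standard-deviation estimate, suppress mismatched features entirely:

```python
import numpy as np

def tukey_weight(e, c):
    """Tukey biweight IRLS weight: (1 - (e/c)^2)^2 inside the cutoff c,
    exactly zero beyond it, so gross outliers are ignored entirely."""
    r = np.abs(e) / c
    w = (1.0 - r ** 2) ** 2
    w[r > 1.0] = 0.0
    return w

# Robustly estimate a scalar parameter from "matches" with two outliers.
data = np.concatenate([np.linspace(0.9, 1.1, 20), [50.0, 60.0]])
mu = np.median(data)                               # robust initialization
for _ in range(10):
    e = data - mu
    sigma = 1.4826 * np.median(np.abs(e)) + 1e-9   # MAD-based std estimate
    w = tukey_weight(e, 4.685 * sigma)             # 4.685: usual Tukey cutoff
    mu = np.sum(w * data) / np.sum(w)
```

The plain mean of this data is pulled to roughly 5.9 by the outliers, while the Tukey-weighted estimate stays at the inlier value of about 1.0.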
(3) Camera pose optimization
After initialization, the map-building thread of the system waits for new key frames. When the number of image frames between the camera and the current key frame exceeds a threshold and the camera is tracking well, the key-frame insertion process executes automatically. The system then evaluates all FAST corners in the newly added key frame with the Shi-Tomasi score to retain the corner features with salient structure, selects the nearest key frame, and establishes feature-point correspondences by epipolar search and block matching. Combined with the pose estimation model this relocalizes the camera accurately, and the matched points are projected into space to generate the current global three-dimensional environment map;
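The Shi-Tomasi assessment mentioned above scores a corner by the smaller eigenvalue of the local gradient structure tensor; a patch is a good corner only when both eigenvalues are large. A minimal sketch on synthetic patches (the 7x7 patch size is an arbitrary choice for illustration):

```python
import numpy as np

def shi_tomasi_score(patch):
    """Smaller eigenvalue of the gradient structure tensor summed over
    the patch: high only when two strong gradient directions exist."""
    gy, gx = np.gradient(patch.astype(float))
    Ixx, Iyy, Ixy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    tr, det = Ixx + Iyy, Ixx * Iyy - Ixy * Ixy
    return tr / 2 - np.sqrt(max((tr / 2) ** 2 - det, 0.0))

flat = np.zeros((7, 7))
edge = np.zeros((7, 7)); edge[:, 3:] = 1.0        # vertical step edge
corner = np.zeros((7, 7)); corner[3:, 3:] = 1.0   # L-shaped corner

# Flat and edge patches have a (near-)zero minimum eigenvalue; only the
# corner patch scores clearly above zero.
scores = [shi_tomasi_score(p) for p in (flat, edge, corner)]
```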
To maintain the global map, while the map-building thread waits for new key frames the system runs local and global Levenberg-Marquardt bundle adjustment to optimize the global consistency of the current map. The mathematical description of this bundle adjustment is:
$$\left\{ \{\xi_2 \ldots \xi_N\},\, \{p_1 \ldots p_M\} \right\} = \arg\min_{\{\{\xi\},\{p\}\}} \sum_{i=1}^{N} \sum_{j \in S_i} \mathrm{Obj}\!\left( \frac{|e_{ji}|}{\sigma_{ji}},\, \sigma_T \right)$$
In the formula, σ_ji is the unbiased estimate of the standard deviation of the FAST feature-point matches in the i-th key frame, ξ_i is the six-vector representing the pose of the i-th key frame, and p_j is a point in the global map.
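Bundle adjustment jointly refines key-frame poses and map points by minimizing the error over all observations. The sketch below solves a deliberately linearized toy version of that joint problem (2-D camera centres, relative-position observations, no rotations and no robust Obj weighting), purely to show its joint structure; all values are illustrative.

```python
import numpy as np

# Toy joint optimisation of one free camera centre c2 (camera 1 is fixed
# at the origin to remove the gauge freedom) and three map points p_j,
# from noisy relative observations z_ij = p_j - c_i. The unknown vector
# is x = (c2, p1, p2, p3), 8 scalars in total.
rng = np.random.default_rng(0)
c2_true = np.array([1.0, 0.0])
pts_true = np.array([[0.5, 2.0], [1.5, 2.5], [-0.5, 1.5]])

rows, rhs = [], []
for j, p in enumerate(pts_true):
    for ci, c in enumerate([np.zeros(2), c2_true]):
        z = p - c + rng.normal(scale=0.01, size=2)   # noisy observation
        for axis in range(2):
            row = np.zeros(8)
            row[2 + 2 * j + axis] = 1.0              # + p_j component
            if ci == 1:
                row[axis] = -1.0                     # - c_2 component
            rows.append(row)
            rhs.append(z[axis])

# Joint least-squares solve over all poses and points at once.
x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
c2_est, pts_est = x[:2], x[2:].reshape(3, 2)
```

Real bundle adjustment replaces this linear system with nonlinear reprojection residuals, a robust loss such as the Tukey objective above, and a Levenberg-Marquardt solver, but the coupled pose-and-point structure of the normal equations is the same.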
3. The variational-mechanism-based indoor environment three-dimensional reconstruction method according to claim 1, characterized in that the depth-map estimation model based on the variational mechanism in step 4 is established and solved as follows:
The variational depth-map estimation model builds a data penalty term under the assumption of illumination invariance, and uses a variational regularization term to guarantee the smoothness of the current depth map; its mathematical model is as follows:
$$E_d = \int_{\Omega} \left( E_{\mathrm{data}} + \lambda E_{\mathrm{reg}} \right) dx$$

In the formula, λ is the weight coefficient between the data penalty term E_data and the variational regularization term E_reg, and Ω is the domain of the depth map;
The current key frame is chosen as the reference frame I_r of the depth-map estimation algorithm, and its adjacent image sequence I = {I_1, I_2, ..., I_n} is used together with the projection model to build the data penalty term E_data, whose mathematical description is:
$$E_{\mathrm{data}} = \frac{1}{|I(r)|} \sum_{I_i \in I} \left| I_r(x) - I_i(x') \right|$$
In the formula, |I(r)| is the number of image frames in the current sequence that share overlapping information with the reference frame, and x' is the projection into I_i of the reference-frame pixel x at depth d, that is:

$$x' = \pi^{-1}\!\left( K\, T_r^i\, \pi(x, d) \right)$$
Under the depth-map smoothness assumption, a weighted Huber operator is introduced to build the variational regularization term so that depth discontinuities at image boundaries are preserved; its mathematical description is:
$$E_{\mathrm{reg}} = g(u)\, \| \nabla d(u) \|_{\alpha}$$

In the formula, ∇d is the gradient of the depth map and g(u) is the per-pixel gradient weight coefficient, g(u) = exp(−a‖∇I_r(u)‖);
The Huber operator ‖x‖_α is defined as:

$$\|x\|_{\alpha} = \begin{cases} \dfrac{\|x\|^2}{2\alpha}, & \|x\| \le \alpha \\[4pt] \|x\| - \dfrac{\alpha}{2}, & \text{otherwise} \end{cases}$$

In the formula, α is a constant;
According to the Legendre-Fenchel transform, the energy function is transformed to:

$$g \| \nabla d \|_{\alpha} = \langle g \nabla d,\, q \rangle - \delta(q) - \frac{\alpha}{2} \|q\|^2$$

In the formula, δ(q) is the indicator of the dual unit ball:

$$\delta(q) = \begin{cases} 0, & \|q\| \le 1 \\ \infty, & \text{otherwise} \end{cases}$$
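The Huber operator and its Legendre-Fenchel dual form can be checked numerically: maximizing x·q − (α/2)q² over the unit ball ‖q‖ ≤ 1 (the role of the indicator δ) reproduces ‖x‖_α. A scalar sketch with a brute-force grid maximization:

```python
import numpy as np

def huber_norm(x, alpha):
    """||x||_alpha: quadratic inside alpha, linear (minus alpha/2) outside."""
    n = abs(x)
    return n * n / (2.0 * alpha) if n <= alpha else n - alpha / 2.0

def huber_via_fenchel(x, alpha):
    """Dual form: max over q in [-1, 1] of x*q - (alpha/2)*q^2, evaluated
    on a dense grid. The constraint |q| <= 1 encodes the indicator delta."""
    q = np.linspace(-1.0, 1.0, 200001)
    return np.max(x * q - 0.5 * alpha * q * q)

# The primal and dual forms agree on inlier and outlier magnitudes alike.
checks = [abs(huber_via_fenchel(x, 1.0) - huber_norm(x, 1.0))
          for x in (-3.0, -0.4, 0.0, 0.7, 2.5)]
```

Note the two branches of the Huber operator meet continuously at the cutoff: both give α/2 at ‖x‖ = α.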
Because solving the above mathematical model directly is complex and computationally expensive, an auxiliary variable h is introduced to build a convex optimization model, and an alternating descent method is used to optimize it. The detailed procedure is as follows:
(1) With h fixed, solve:

$$\arg\max_{q}\, \arg\min_{d}\; E_{d,q}$$

$$E_{d,q} = \int_{\Omega} \left( \langle g \nabla d,\, q \rangle + \frac{1}{2\theta}(d - h)^2 - \delta(q) - \frac{\alpha}{2}\|q\|^2 \right) dx$$
In the formula, g is the gradient weight coefficient in the variational regularization term, and θ is the constant coefficient of the quadratic coupling term;
According to the Lagrangian extremum conditions, the above energy function reaches an extremum when:

$$\frac{\partial E_{d,q}}{\partial q} = g \nabla d - \alpha q = 0$$

$$\frac{\partial E_{d,q}}{\partial d} = g\, \operatorname{div} q + \frac{1}{\theta}(d - h) = 0$$

In the formula, div q is the divergence of q;
Combined with a discretized description of the partial derivatives, the above extremum conditions can be expressed as:

$$\frac{q^{n+1} - q^n}{\varepsilon_q} = g \nabla d - \alpha q^{n+1}$$

$$\frac{d^{n+1} - d^n}{\varepsilon_d} = g\, \operatorname{div} q + \frac{1}{\theta}\left( d^{n+1} - h \right)$$
A primal-dual algorithm realizes the iterative optimization of the energy function, that is:

$$q^{n+1} = \frac{\left( q^n + \varepsilon_q\, g \nabla d^n \right) / \left( 1 + \varepsilon_q \alpha \right)}{\max\!\left( 1,\, \left\| \left( q^n + \varepsilon_q\, g \nabla d^n \right) / \left( 1 + \varepsilon_q \alpha \right) \right\| \right)}$$

$$d^{n+1} = \frac{d^n + \varepsilon_d \left( g\, \operatorname{div} q^{n+1} + h^n / \theta \right)}{1 + \varepsilon_d / \theta}$$
In the formula, ε_q and ε_d are constants, the gradient step sizes of the maximization and minimization updates respectively;
(2) With d fixed, solve:

$$\arg\min_{h} E_h$$

$$E_h = \int_{\Omega} \left( \frac{\theta}{2}(d - h)^2 + \frac{\lambda}{|I(r)|} \sum_{i=0}^{n} \left| I_i(x) - I_{\mathrm{ref}}(x, h) \right| \right) dx$$
When solving the above energy function, in order to reduce the complexity of the algorithm while preserving local detail during reconstruction, the depth range [d_min, d_max] is divided into S sample planes, and an exhaustive search over the planes yields the optimal solution of the current energy function. The step size is chosen as:
$$d_{\mathrm{inc}}^{k} = \frac{S\, d_{\min}\, d_{\max}}{(S - k)\, d_{\min} + d_{\max}}$$

In the formula, d_inc^k is the interval between the k-th and (k−1)-th sample planes.
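The exhaustive search over S sample planes can concentrate planes at near depths by sampling uniformly in inverse depth, which makes the plane spacing grow with depth in the manner of the depth-dependent step size above. A sketch (the plane count and the toy per-pixel cost are illustrative stand-ins for the photometric data term):

```python
import numpy as np

def inverse_depth_planes(d_min, d_max, S):
    """S + 1 candidate depth planes, uniform in inverse depth: spacing is
    fine near d_min and coarse near d_max."""
    inv = np.linspace(1.0 / d_max, 1.0 / d_min, S + 1)
    return 1.0 / inv            # descending from d_max to d_min

planes = inverse_depth_planes(0.5, 10.0, 32)

# Exhaustive search: each pixel keeps the plane of minimum cost. The toy
# cost |plane - true depth| stands in for the photometric penalty term.
true_depths = np.array([[1.0], [4.0]])
cost = np.abs(planes[None, :] - true_depths)
best = planes[np.argmin(cost, axis=1)]
```

Because the sampling is uniform in inverse depth, the near pixel (true depth 1 m) is resolved much more finely than the far one (4 m), matching the behaviour the claimed step size is designed to give.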
CN201310173608.XA 2013-05-13 2013-05-13 Based on the indoor environment three-dimensional rebuilding method of variation mechanism Expired - Fee Related CN103247075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310173608.XA CN103247075B (en) 2013-05-13 2013-05-13 Based on the indoor environment three-dimensional rebuilding method of variation mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310173608.XA CN103247075B (en) 2013-05-13 2013-05-13 Based on the indoor environment three-dimensional rebuilding method of variation mechanism

Publications (2)

Publication Number Publication Date
CN103247075A true CN103247075A (en) 2013-08-14
CN103247075B CN103247075B (en) 2015-08-19

Family

ID=48926580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310173608.XA Expired - Fee Related CN103247075B (en) 2013-05-13 2013-05-13 Based on the indoor environment three-dimensional rebuilding method of variation mechanism

Country Status (1)

Country Link
CN (1) CN103247075B (en)

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103901891A (en) * 2014-04-12 2014-07-02 复旦大学 Dynamic particle tree SLAM algorithm based on hierarchical structure
CN103914874A (en) * 2014-04-08 2014-07-09 中山大学 Compact SFM three-dimensional reconstruction method without feature extraction
CN103942832A (en) * 2014-04-11 2014-07-23 浙江大学 Real-time indoor scene reconstruction method based on on-line structure analysis
CN104427230A (en) * 2013-08-28 2015-03-18 北京大学 Reality enhancement method and reality enhancement system
CN104463962A (en) * 2014-12-09 2015-03-25 合肥工业大学 Three-dimensional scene reconstruction method based on GPS information video
CN104537709A (en) * 2014-12-15 2015-04-22 西北工业大学 Real-time three-dimensional reconstruction key frame determination method based on position and orientation changes
CN104881029A (en) * 2015-05-15 2015-09-02 重庆邮电大学 Mobile robot navigation method based on one point RANSAC and FAST algorithm
WO2015134832A1 (en) * 2014-03-06 2015-09-11 Nec Laboratories America, Inc. High accuracy monocular moving object localization
CN105513083A (en) * 2015-12-31 2016-04-20 新浪网技术(中国)有限公司 PTAM camera tracking method and device
CN105654492A (en) * 2015-12-30 2016-06-08 哈尔滨工业大学 Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
CN105678842A (en) * 2016-01-11 2016-06-15 湖南拓视觉信息技术有限公司 Manufacturing method and device for three-dimensional map of indoor environment
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN105686936A (en) * 2016-01-12 2016-06-22 浙江大学 Sound coding interaction system based on RGB-IR camera
CN105825520A (en) * 2015-01-08 2016-08-03 北京雷动云合智能技术有限公司 Monocular SLAM (Simultaneous Localization and Mapping) method capable of creating large-scale map
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN105869136A (en) * 2015-01-22 2016-08-17 北京雷动云合智能技术有限公司 Collaborative visual SLAM method based on multiple cameras
CN105928505A (en) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 Determination method and apparatus for position and orientation of mobile robot
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106052674A (en) * 2016-05-20 2016-10-26 青岛克路德机器人有限公司 Indoor robot SLAM method and system
CN106097304A (en) * 2016-05-31 2016-11-09 西北工业大学 A kind of unmanned plane real-time online ground drawing generating method
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 A kind of RGB D SLAM method of combination monocular vision
CN106289099A (en) * 2016-07-28 2017-01-04 汕头大学 A kind of single camera vision system and three-dimensional dimension method for fast measuring based on this system
CN106485744A (en) * 2016-10-10 2017-03-08 成都奥德蒙科技有限公司 A kind of synchronous superposition method
CN106529838A (en) * 2016-12-16 2017-03-22 湖南拓视觉信息技术有限公司 Virtual assembling method and device
CN106595601A (en) * 2016-12-12 2017-04-26 天津大学 Camera six-degree-of-freedom pose accurate repositioning method without hand eye calibration
CN106780588A (en) * 2016-12-09 2017-05-31 浙江大学 A kind of image depth estimation method based on sparse laser observations
CN106780576A (en) * 2016-11-23 2017-05-31 北京航空航天大学 A kind of camera position and orientation estimation method towards RGBD data flows
CN106803275A (en) * 2017-02-20 2017-06-06 苏州中科广视文化科技有限公司 Estimated based on camera pose and the 2D panoramic videos of spatial sampling are generated
CN106875437A (en) * 2016-12-27 2017-06-20 北京航空航天大学 A kind of extraction method of key frame towards RGBD three-dimensional reconstructions
CN106875446A (en) * 2017-02-20 2017-06-20 清华大学 Camera method for relocating and device
CN106940186A (en) * 2017-02-16 2017-07-11 华中科技大学 A kind of robot autonomous localization and air navigation aid and system
CN106997614A (en) * 2017-03-17 2017-08-01 杭州光珀智能科技有限公司 A kind of large scale scene 3D modeling method and its device based on depth camera
CN107004275A (en) * 2014-11-21 2017-08-01 Metaio有限公司 For determining that at least one of 3D in absolute space ratio of material object reconstructs the method and system of the space coordinate of part
CN107160395A (en) * 2017-06-07 2017-09-15 中国人民解放军装甲兵工程学院 Map constructing method and robot control system
CN107292949A (en) * 2017-05-25 2017-10-24 深圳先进技术研究院 Three-dimensional rebuilding method, device and the terminal device of scene
CN107481279A (en) * 2017-05-18 2017-12-15 华中科技大学 A kind of monocular video depth map computational methods
CN107506040A (en) * 2017-08-29 2017-12-22 上海爱优威软件开发有限公司 A kind of space path method and system for planning
CN107657640A (en) * 2017-09-30 2018-02-02 南京大典科技有限公司 Intelligent patrol inspection management method based on ORB SLAM
CN107818592A (en) * 2017-11-24 2018-03-20 北京华捷艾米科技有限公司 Method, system and the interactive system of collaborative synchronous superposition
CN107833245A (en) * 2017-11-28 2018-03-23 北京搜狐新媒体信息技术有限公司 SLAM method and system based on monocular vision Feature Points Matching
CN107862720A (en) * 2017-11-24 2018-03-30 北京华捷艾米科技有限公司 Pose optimization method and pose optimization system based on the fusion of more maps
CN107909643A (en) * 2017-11-06 2018-04-13 清华大学 Mixing scene reconstruction method and device based on model segmentation
CN108062537A (en) * 2017-12-29 2018-05-22 幻视信息科技(深圳)有限公司 A kind of 3d space localization method, device and computer readable storage medium
CN108122263A (en) * 2017-04-28 2018-06-05 上海联影医疗科技有限公司 Image re-construction system and method
CN108154531A (en) * 2018-01-03 2018-06-12 深圳北航新兴产业技术研究院 A kind of method and apparatus for calculating body-surface rauma region area
CN108171787A (en) * 2017-12-18 2018-06-15 桂林电子科技大学 A kind of three-dimensional rebuilding method based on the detection of ORB features
CN108242079A (en) * 2017-12-30 2018-07-03 北京工业大学 A kind of VSLAM methods based on multiple features visual odometry and figure Optimized model
CN108447116A (en) * 2018-02-13 2018-08-24 中国传媒大学 The method for reconstructing three-dimensional scene and device of view-based access control model SLAM
CN108629843A (en) * 2017-03-24 2018-10-09 成都理想境界科技有限公司 A kind of method and apparatus for realizing augmented reality
CN108898669A (en) * 2018-07-17 2018-11-27 网易(杭州)网络有限公司 Data processing method, device, medium and calculating equipment
WO2018214086A1 (en) * 2017-05-25 2018-11-29 深圳先进技术研究院 Method and apparatus for three-dimensional reconstruction of scene, and terminal device
CN109191526A (en) * 2018-09-10 2019-01-11 杭州艾米机器人有限公司 Three-dimensional environment method for reconstructing and system based on RGBD camera and optical encoder
CN109254579A (en) * 2017-07-14 2019-01-22 上海汽车集团股份有限公司 A kind of binocular vision camera hardware system, 3 D scene rebuilding system and method
CN109697753A (en) * 2018-12-10 2019-04-30 智灵飞(北京)科技有限公司 A kind of no-manned plane three-dimensional method for reconstructing, unmanned plane based on RGB-D SLAM
CN109739079A (en) * 2018-12-25 2019-05-10 广东工业大学 A method of improving VSLAM system accuracy
CN109870118A (en) * 2018-11-07 2019-06-11 南京林业大学 A kind of point cloud acquisition method of Oriented Green plant temporal model
CN110059651A (en) * 2019-04-24 2019-07-26 北京计算机技术及应用研究所 A kind of camera real-time tracking register method
CN110555883A (en) * 2018-04-27 2019-12-10 腾讯科技(深圳)有限公司 repositioning method and device for camera attitude tracking process and storage medium
CN110751640A (en) * 2019-10-17 2020-02-04 南京鑫和汇通电子科技有限公司 Quadrangle detection method of depth image based on angular point pairing
CN110966917A (en) * 2018-09-29 2020-04-07 深圳市掌网科技股份有限公司 Indoor three-dimensional scanning system and method for mobile terminal
CN111145238A (en) * 2019-12-12 2020-05-12 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment
CN111340864A (en) * 2020-02-26 2020-06-26 浙江大华技术股份有限公司 Monocular estimation-based three-dimensional scene fusion method and device
CN111652901A (en) * 2020-06-02 2020-09-11 山东大学 Texture-free three-dimensional object tracking method based on confidence coefficient and feature fusion
CN112221132A (en) * 2020-10-14 2021-01-15 王军力 Method and system for applying three-dimensional weiqi to online game
CN112348869A (en) * 2020-11-17 2021-02-09 的卢技术有限公司 Method for recovering monocular SLAM scale through detection and calibration
CN112348868A (en) * 2020-11-06 2021-02-09 养哇(南京)科技有限公司 Method and system for recovering monocular SLAM scale through detection and calibration
CN112597334A (en) * 2021-01-15 2021-04-02 天津帕克耐科技有限公司 Data processing method of communication data center
CN112634371A (en) * 2019-09-24 2021-04-09 北京百度网讯科技有限公司 Method and device for outputting information and calibrating camera
CN113034606A (en) * 2021-02-26 2021-06-25 嘉兴丰鸟科技有限公司 Motion recovery structure calculation method
CN113902847A (en) * 2021-10-11 2022-01-07 岱悟智能科技(上海)有限公司 Monocular depth image pose optimization method based on three-dimensional feature constraint
US11348260B2 (en) * 2017-06-22 2022-05-31 Interdigital Vc Holdings, Inc. Methods and devices for encoding and reconstructing a point cloud
WO2022142049A1 (en) * 2020-12-29 2022-07-07 浙江商汤科技开发有限公司 Map construction method and apparatus, device, storage medium, and computer program product
CN117214860A (en) * 2023-08-14 2023-12-12 北京科技大学顺德创新学院 Laser radar odometer method based on twin feature pyramid and ground segmentation

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701811B (en) * 2016-01-12 2018-05-22 浙江大学 A kind of acoustic coding exchange method based on RGB-IR cameras
CN108645398A (en) * 2018-02-09 2018-10-12 深圳积木易搭科技技术有限公司 A kind of instant positioning and map constructing method and system based on structured environment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07182541A (en) * 1993-12-21 1995-07-21 Nec Corp Preparing method for three-dimensional model
CN101369348A (en) * 2008-11-07 2009-02-18 上海大学 Novel sight point reconstruction method for multi-sight point collection/display system of convergence type camera
CN102800127A (en) * 2012-07-18 2012-11-28 清华大学 Light stream optimization based three-dimensional reconstruction method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07182541A (en) * 1993-12-21 1995-07-21 Nec Corp Preparing method for three-dimensional model
CN101369348A (en) * 2008-11-07 2009-02-18 上海大学 Novel sight point reconstruction method for multi-sight point collection/display system of convergence type camera
CN102800127A (en) * 2012-07-18 2012-11-28 清华大学 Light stream optimization based three-dimensional reconstruction method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TAGUCHI, Y., et al.: "SLAM using both points and planes for hand-held 3D sensors", Mixed and Augmented Reality (ISMAR), 2012 IEEE International Symposium on *
LIU Xin, et al.: "Fast object reconstruction based on GPU and Kinect", Acta Automatica Sinica *

Cited By (122)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104427230A (en) * 2013-08-28 2015-03-18 北京大学 Reality enhancement method and reality enhancement system
CN104427230B (en) * 2013-08-28 2017-08-25 北京大学 The method of augmented reality and the system of augmented reality
US9367922B2 (en) 2014-03-06 2016-06-14 Nec Corporation High accuracy monocular moving object localization
WO2015134832A1 (en) * 2014-03-06 2015-09-11 Nec Laboratories America, Inc. High accuracy monocular moving object localization
WO2015154601A1 (en) * 2014-04-08 2015-10-15 中山大学 Non-feature extraction-based dense sfm three-dimensional reconstruction method
CN103914874A (en) * 2014-04-08 2014-07-09 中山大学 Compact SFM three-dimensional reconstruction method without feature extraction
CN103914874B (en) * 2014-04-08 2017-02-01 中山大学 Compact SFM three-dimensional reconstruction method without feature extraction
US9686527B2 (en) 2014-04-08 2017-06-20 Sun Yat-Sen University Non-feature extraction-based dense SFM three-dimensional reconstruction method
CN103942832B (en) * 2014-04-11 2016-07-06 浙江大学 A kind of indoor scene real-time reconstruction method based on online structural analysis
CN103942832A (en) * 2014-04-11 2014-07-23 浙江大学 Real-time indoor scene reconstruction method based on on-line structure analysis
CN103901891A (en) * 2014-04-12 2014-07-02 复旦大学 Dynamic particle tree SLAM algorithm based on hierarchical structure
CN107004275A (en) * 2014-11-21 2017-08-01 Metaio有限公司 For determining that at least one of 3D in absolute space ratio of material object reconstructs the method and system of the space coordinate of part
CN107004275B (en) * 2014-11-21 2020-09-29 苹果公司 Method and system for determining spatial coordinates of a 3D reconstruction of at least a part of a physical object
US10846871B2 (en) 2014-11-21 2020-11-24 Apple Inc. Method and system for determining spatial coordinates of a 3D reconstruction of at least part of a real object at absolute spatial scale
US11741624B2 (en) 2014-11-21 2023-08-29 Apple Inc. Method and system for determining spatial coordinates of a 3D reconstruction of at least part of a real object at absolute spatial scale
CN104463962A (en) * 2014-12-09 2015-03-25 合肥工业大学 Three-dimensional scene reconstruction method based on GPS information video
CN104463962B (en) * 2014-12-09 2017-02-22 合肥工业大学 Three-dimensional scene reconstruction method based on GPS information video
CN104537709B (en) * 2014-12-15 2017-09-29 西北工业大学 It is a kind of that method is determined based on the real-time three-dimensional reconstruction key frame that pose changes
CN104537709A (en) * 2014-12-15 2015-04-22 西北工业大学 Real-time three-dimensional reconstruction key frame determination method based on position and orientation changes
CN105825520A (en) * 2015-01-08 2016-08-03 北京雷动云合智能技术有限公司 Monocular SLAM (Simultaneous Localization and Mapping) method capable of creating large-scale map
CN105869136A (en) * 2015-01-22 2016-08-17 北京雷动云合智能技术有限公司 Collaborative visual SLAM method based on multiple cameras
CN104881029A (en) * 2015-05-15 2015-09-02 重庆邮电大学 Mobile robot navigation method based on one point RANSAC and FAST algorithm
CN104881029B (en) * 2015-05-15 2018-01-30 重庆邮电大学 Mobile Robotics Navigation method based on a point RANSAC and FAST algorithms
CN105654492A (en) * 2015-12-30 2016-06-08 哈尔滨工业大学 Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
CN105654492B (en) * 2015-12-30 2018-09-07 哈尔滨工业大学 Robust real-time three-dimensional method for reconstructing based on consumer level camera
CN105513083A (en) * 2015-12-31 2016-04-20 新浪网技术(中国)有限公司 PTAM camera tracking method and device
CN105513083B (en) * 2015-12-31 2019-02-22 新浪网技术(中国)有限公司 A kind of PTAM video camera tracking method and device
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN105678842A (en) * 2016-01-11 2016-06-15 湖南拓视觉信息技术有限公司 Manufacturing method and device for three-dimensional map of indoor environment
CN105686936A (en) * 2016-01-12 2016-06-22 浙江大学 Sound coding interaction system based on RGB-IR camera
CN105686936B (en) * 2016-01-12 2017-12-29 浙江大学 A kind of acoustic coding interactive system based on RGB-IR cameras
CN105928505B (en) * 2016-04-19 2019-01-29 深圳市神州云海智能科技有限公司 The pose of mobile robot determines method and apparatus
CN105928505A (en) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 Determination method and apparatus for position and orientation of mobile robot
CN105856230B (en) * 2016-05-06 2017-11-24 简燕梅 A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN106052674B (en) * 2016-05-20 2019-07-26 青岛克路德机器人有限公司 A kind of SLAM method and system of Indoor Robot
CN106052674A (en) * 2016-05-20 2016-10-26 青岛克路德机器人有限公司 Indoor robot SLAM method and system
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106097304A (en) * 2016-05-31 2016-11-09 西北工业大学 A kind of unmanned plane real-time online ground drawing generating method
CN106097304B (en) * 2016-05-31 2019-04-23 西北工业大学 A kind of unmanned plane real-time online ground drawing generating method
CN106127739B (en) * 2016-06-16 2021-04-27 华东交通大学 Monocular vision combined RGB-D SLAM method
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 A kind of RGB D SLAM method of combination monocular vision
CN106289099A (en) * 2016-07-28 2017-01-04 汕头大学 A kind of single camera vision system and three-dimensional dimension method for fast measuring based on this system
CN106289099B (en) * 2016-07-28 2018-11-20 汕头大学 A kind of single camera vision system and the three-dimensional dimension method for fast measuring based on the system
CN106485744B (en) * 2016-10-10 2019-08-20 成都弥知科技有限公司 A kind of synchronous superposition method
CN106485744A (en) * 2016-10-10 2017-03-08 成都奥德蒙科技有限公司 A kind of synchronous superposition method
CN106780576B (en) * 2016-11-23 2020-03-17 北京航空航天大学 RGBD data stream-oriented camera pose estimation method
CN106780576A (en) * 2016-11-23 2017-05-31 北京航空航天大学 A kind of camera position and orientation estimation method towards RGBD data flows
CN106780588A (en) * 2016-12-09 2017-05-31 浙江大学 A kind of image depth estimation method based on sparse laser observations
CN106595601A (en) * 2016-12-12 2017-04-26 天津大学 Camera six-degree-of-freedom pose accurate repositioning method without hand eye calibration
CN106595601B (en) * 2016-12-12 2020-01-07 天津大学 Accurate repositioning method for camera pose with six degrees of freedom without hand-eye calibration
CN106529838A (en) * 2016-12-16 2017-03-22 湖南拓视觉信息技术有限公司 Virtual assembling method and device
CN106875437A (en) * 2016-12-27 2017-06-20 北京航空航天大学 A kind of extraction method of key frame towards RGBD three-dimensional reconstructions
CN106940186A (en) * 2017-02-16 2017-07-11 华中科技大学 A kind of robot autonomous localization and air navigation aid and system
CN106940186B (en) * 2017-02-16 2019-09-24 华中科技大学 A kind of robot autonomous localization and navigation methods and systems
CN106875446B (en) * 2017-02-20 2019-09-20 清华大学 Camera method for relocating and device
CN106803275A (en) * 2017-02-20 2017-06-06 苏州中科广视文化科技有限公司 Estimated based on camera pose and the 2D panoramic videos of spatial sampling are generated
CN106875446A (en) * 2017-02-20 2017-06-20 清华大学 Camera method for relocating and device
CN106997614A (en) * 2017-03-17 2017-08-01 杭州光珀智能科技有限公司 A kind of large scale scene 3D modeling method and its device based on depth camera
CN108629843B (en) * 2017-03-24 2021-07-13 成都理想境界科技有限公司 Method and equipment for realizing augmented reality
CN108629843A (en) * 2017-03-24 2018-10-09 成都理想境界科技有限公司 A kind of method and apparatus for realizing augmented reality
CN108122263A (en) * 2017-04-28 2018-06-05 上海联影医疗科技有限公司 Image re-construction system and method
US11062487B2 (en) 2017-04-28 2021-07-13 Shanghai United Imaging Healthcare Co., Ltd. System and method for image reconstruction
CN108122263B (en) * 2017-04-28 2021-06-25 上海联影医疗科技股份有限公司 Image reconstruction system and method
US11455756B2 (en) 2017-04-28 2022-09-27 Shanghai United Imaging Healthcare Co., Ltd. System and method for image reconstruction
CN107481279A (en) * 2017-05-18 2017-12-15 华中科技大学 A kind of monocular video depth map computational methods
CN107292949A (en) * 2017-05-25 2017-10-24 深圳先进技术研究院 Three-dimensional rebuilding method, device and the terminal device of scene
CN107292949B (en) * 2017-05-25 2020-06-16 深圳先进技术研究院 Three-dimensional reconstruction method and device of scene and terminal equipment
WO2018214086A1 (en) * 2017-05-25 2018-11-29 深圳先进技术研究院 Method and apparatus for three-dimensional reconstruction of scene, and terminal device
CN107160395A (en) * 2017-06-07 2017-09-15 中国人民解放军装甲兵工程学院 Map constructing method and robot control system
US11348260B2 (en) * 2017-06-22 2022-05-31 Interdigital Vc Holdings, Inc. Methods and devices for encoding and reconstructing a point cloud
CN109254579A (en) * 2017-07-14 2019-01-22 上海汽车集团股份有限公司 A kind of binocular vision camera hardware system, 3 D scene rebuilding system and method
CN107506040A (en) * 2017-08-29 2017-12-22 上海爱优威软件开发有限公司 A kind of space path method and system for planning
CN107657640A (en) * 2017-09-30 2018-02-02 南京大典科技有限公司 Intelligent patrol inspection management method based on ORB SLAM
CN107909643B (en) * 2017-11-06 2020-04-24 清华大学 Mixed scene reconstruction method and device based on model segmentation
CN107909643A (en) * 2017-11-06 2018-04-13 清华大学 Mixing scene reconstruction method and device based on model segmentation
CN107862720B (en) * 2017-11-24 2020-05-22 北京华捷艾米科技有限公司 Pose optimization method and pose optimization system based on multi-map fusion
CN107862720A (en) * 2017-11-24 2018-03-30 北京华捷艾米科技有限公司 Pose optimization method and pose optimization system based on the fusion of more maps
CN107818592A (en) * 2017-11-24 2018-03-20 北京华捷艾米科技有限公司 Method, system and the interactive system of collaborative synchronous superposition
CN107833245A (en) * 2017-11-28 2018-03-23 北京搜狐新媒体信息技术有限公司 SLAM method and system based on monocular vision Feature Points Matching
CN107833245B (en) * 2017-11-28 2020-02-07 北京搜狐新媒体信息技术有限公司 Monocular visual feature point matching-based SLAM method and system
CN108171787A (en) * 2017-12-18 2018-06-15 桂林电子科技大学 A kind of three-dimensional rebuilding method based on the detection of ORB features
CN108062537A (en) * 2017-12-29 2018-05-22 幻视信息科技(深圳)有限公司 A kind of 3d space localization method, device and computer readable storage medium
CN108242079B (en) * 2017-12-30 2021-06-25 北京工业大学 VSLAM method based on multi-feature visual odometer and graph optimization model
CN108242079A (en) * 2017-12-30 2018-07-03 北京工业大学 A kind of VSLAM methods based on multiple features visual odometry and figure Optimized model
CN108154531B (en) * 2018-01-03 2021-10-08 深圳北航新兴产业技术研究院 Method and device for calculating area of body surface damage region
CN108154531A (en) * 2018-01-03 2018-06-12 深圳北航新兴产业技术研究院 A kind of method and apparatus for calculating body-surface rauma region area
CN108447116A (en) * 2018-02-13 2018-08-24 中国传媒大学 The method for reconstructing three-dimensional scene and device of view-based access control model SLAM
CN110555883B (en) * 2018-04-27 2022-07-22 腾讯科技(深圳)有限公司 Repositioning method and device for camera attitude tracking process and storage medium
CN110555883A (en) * 2018-04-27 2019-12-10 腾讯科技(深圳)有限公司 repositioning method and device for camera attitude tracking process and storage medium
CN108898669A (en) * 2018-07-17 2018-11-27 网易(杭州)网络有限公司 Data processing method, device, medium and calculating equipment
CN109191526A (en) * 2018-09-10 2019-01-11 杭州艾米机器人有限公司 Three-dimensional environment method for reconstructing and system based on RGBD camera and optical encoder
CN109191526B (en) * 2018-09-10 2020-07-07 杭州艾米机器人有限公司 Three-dimensional environment reconstruction method and system based on RGBD camera and optical encoder
CN110966917A (en) * 2018-09-29 2020-04-07 深圳市掌网科技股份有限公司 Indoor three-dimensional scanning system and method for mobile terminal
CN109870118B (en) * 2018-11-07 2020-09-11 南京林业大学 Point cloud collection method for green plant time sequence model
CN109870118A (en) * 2018-11-07 2019-06-11 南京林业大学 A kind of point cloud acquisition method of Oriented Green plant temporal model
CN109697753A (en) * 2018-12-10 2019-04-30 智灵飞(北京)科技有限公司 A kind of no-manned plane three-dimensional method for reconstructing, unmanned plane based on RGB-D SLAM
CN109697753B (en) * 2018-12-10 2023-10-03 智灵飞(北京)科技有限公司 Unmanned aerial vehicle three-dimensional reconstruction method based on RGB-D SLAM and unmanned aerial vehicle
CN109739079B (en) * 2018-12-25 2022-05-10 九天创新(广东)智能科技有限公司 Method for improving VSLAM system precision
CN109739079A (en) * 2018-12-25 2019-05-10 广东工业大学 A method of improving VSLAM system accuracy
CN110059651B (en) * 2019-04-24 2021-07-02 北京计算机技术及应用研究所 Real-time tracking and registering method for camera
CN110059651A (en) * 2019-04-24 2019-07-26 北京计算机技术及应用研究所 Camera real-time tracking and registration method
CN112634371A (en) * 2019-09-24 2021-04-09 北京百度网讯科技有限公司 Method and device for outputting information and calibrating camera
CN112634371B (en) * 2019-09-24 2023-12-15 阿波罗智联(北京)科技有限公司 Method and device for outputting information and calibrating camera
CN110751640A (en) * 2019-10-17 2020-02-04 南京鑫和汇通电子科技有限公司 Quadrangle detection method of depth image based on angular point pairing
WO2021115071A1 (en) * 2019-12-12 2021-06-17 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and apparatus for monocular endoscope image, and terminal device
CN111145238A (en) * 2019-12-12 2020-05-12 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment
CN111145238B (en) * 2019-12-12 2023-09-22 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN111340864A (en) * 2020-02-26 2020-06-26 浙江大华技术股份有限公司 Monocular estimation-based three-dimensional scene fusion method and device
CN111340864B (en) * 2020-02-26 2023-12-12 浙江大华技术股份有限公司 Three-dimensional scene fusion method and device based on monocular estimation
CN111652901B (en) * 2020-06-02 2021-03-26 山东大学 Texture-free three-dimensional object tracking method based on confidence coefficient and feature fusion
CN111652901A (en) * 2020-06-02 2020-09-11 山东大学 Texture-free three-dimensional object tracking method based on confidence coefficient and feature fusion
CN112221132A (en) * 2020-10-14 2021-01-15 王军力 Method and system for applying three-dimensional weiqi to online game
CN112348868A (en) * 2020-11-06 2021-02-09 养哇(南京)科技有限公司 Method and system for recovering monocular SLAM scale through detection and calibration
CN112348869A (en) * 2020-11-17 2021-02-09 的卢技术有限公司 Method for recovering monocular SLAM scale through detection and calibration
WO2022142049A1 (en) * 2020-12-29 2022-07-07 浙江商汤科技开发有限公司 Map construction method and apparatus, device, storage medium, and computer program product
CN112597334A (en) * 2021-01-15 2021-04-02 天津帕克耐科技有限公司 Data processing method of communication data center
CN113034606A (en) * 2021-02-26 2021-06-25 嘉兴丰鸟科技有限公司 Structure-from-motion calculation method
CN113902847A (en) * 2021-10-11 2022-01-07 岱悟智能科技(上海)有限公司 Monocular depth image pose optimization method based on three-dimensional feature constraint
CN113902847B (en) * 2021-10-11 2024-04-16 岱悟智能科技(上海)有限公司 Monocular depth image pose optimization method based on three-dimensional feature constraint
CN117214860A (en) * 2023-08-14 2023-12-12 北京科技大学顺德创新学院 Laser radar odometer method based on twin feature pyramid and ground segmentation
CN117214860B (en) * 2023-08-14 2024-04-19 北京科技大学顺德创新学院 Laser radar odometer method based on twin feature pyramid and ground segmentation

Also Published As

Publication number Publication date
CN103247075B (en) 2015-08-19

Similar Documents

Publication Publication Date Title
CN103247075B (en) Variational mechanism-based indoor environment three-dimensional reconstruction method
CN111968129B (en) Instant positioning and map construction system and method with semantic perception
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
CN103400409B (en) Coverage 3D visualization method based on fast camera attitude estimation
CN105096386B (en) Automatic geometric map generation method for large-scale complex urban environments
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
US9613420B2 (en) Method for locating a camera and for 3D reconstruction in a partially known environment
Turner et al. Fast, automated, scalable generation of textured 3D models of indoor environments
CN103106688B (en) Indoor three-dimensional scene reconstruction method based on a two-layer registration method
CN112001926B (en) RGBD multi-camera calibration method, system and application based on multi-dimensional semantic mapping
CN106485675B (en) Scene flow estimation method based on 3D local rigidity and anisotropic smoothing guided by depth maps
CN108564616A (en) Fast and robust RGB-D indoor three-dimensional scene reconstruction method
CN106960442A (en) Monocular infrared-based wide-field-of-view three-dimensional construction method for robot night vision
CN105809687A (en) Monocular vision ranging method based on edge point information in image
Pretto et al. Omnidirectional dense large-scale mapping and navigation based on meaningful triangulation
CN108133496B (en) Dense map creation method based on g2o and random fern algorithm
GB2580691A (en) Depth estimation
CN103260008B (en) Projection conversion method from image position to physical position
Liu et al. Dense stereo matching strategy for oblique images that considers the plane directions in urban areas
CN114529681A (en) Method and system for constructing a three-dimensional building temperature field model with a handheld dual camera
CN102663812A (en) Direct method for three-dimensional motion detection and dense structure reconstruction based on variational optical flow
CN106408654B (en) Three-dimensional map creation method and system
Jacquet et al. Real-world normal map capture for nearly flat reflective surfaces
Kurz et al. Bundle adjustment for stereoscopic 3d
Chen et al. Densefusion: Large-scale online dense pointcloud and dsm mapping for uavs

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150819

Termination date: 20200513
