CN107527366A - Camera tracking method for a depth camera - Google Patents

Camera tracking method for a depth camera

Info

Publication number
CN107527366A
CN107527366A CN201710727980.9A
Authority
CN
China
Prior art keywords
camera
depth
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710727980.9A
Other languages
Chinese (zh)
Other versions
CN107527366B (en)
Inventor
李朔
杨高峰
李骊
周晓军
王行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Zhuxi Technology Co ltd
Original Assignee
Shanghai Wisdom Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wisdom Electronic Technology Co Ltd filed Critical Shanghai Wisdom Electronic Technology Co Ltd
Priority to CN201710727980.9A priority Critical patent/CN107527366B/en
Publication of CN107527366A publication Critical patent/CN107527366A/en
Application granted granted Critical
Publication of CN107527366B publication Critical patent/CN107527366B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a camera tracking method for a depth camera. Depending on whether the feature points (gray-level gradients) of the gray-level image are distinct, the method selects either a vision-based camera tracking mode or a depth-based camera tracking mode. In the vision-based mode, a joint objective function is constructed from the photometric error and the depth value error; in the depth-based mode, an objective function is constructed from a signed distance function model. By switching between the two modes, the invention enhances the applicability of the system and improves its stability.

Description

Camera tracking method for a depth camera
Technical field
The invention belongs to the field of intelligent perception technology, and more particularly relates to a camera tracking method for a depth camera.
Background technology
Using a depth camera to track camera motion and thereby construct a visual odometry is an increasingly popular approach in Visual SLAM (Simultaneous Localization and Mapping). Accurate camera pose estimation is the basis of environmental modeling and an important research topic in Visual SLAM. The conventional approach to camera motion tracking is to extract and match discrete sparse visual features, construct an objective function from the re-projection error, and then solve for the minimum of the objective function to estimate the camera pose. The effectiveness of such methods depends on accurate image feature keypoints and descriptors, and the feature extraction stage consumes considerable computing resources.
Chinese patent application (publication number 106556412A) discloses an "RGB-D visual odometry method considering ground constraints in an indoor environment". The method constructs a spatial point cloud from RGB-D color and depth information, then extracts rotation-invariant ORB visual features from the color image to build an enhanced point set. Under the assumption of a constant-velocity camera motion model, the plane information of the ground together with the height and pitch angle of the camera is used to predict the probable position of the plane in the next frame; this prediction serves as the initial value for matching and aligning the enhanced point sets, so that the relative pose change of the camera can be estimated fairly accurately. However, when the feature corners contained in the visual features are missing or the visual information is sparse, the method is easily limited.
Chinese patent application (application number: 201610219378) discloses a "visual odometry implementation method fusing RGB and depth information". The method first extracts feature points and performs coarse matching with random sample consensus (RANSAC). The point cloud is then down-sampled, and fine matching is carried out with the iterative closest point (ICP) algorithm. Because it relies on visual feature points, the method is again severely limited when the feature points are not distinct enough.
Chinese patent application (publication number: 105045263A) discloses a "Kinect-based robot self-localization method". Similar to matching planar laser radar scans against an environment model, the method first extracts the ground features from the point cloud, projects the three-dimensional point cloud onto the two-dimensional ground plane, and then matches the ground projection against a raster model of the environment to estimate the inter-frame relative motion of the camera. Because a planar grid map of the environment is constructed in advance as the matching reference, the results are relatively accurate. However, the dependence on an existing environment model limits the scope of application, and the method is unsuited to online motion tracking in environments without a known model.
As can be seen, methods based on visual feature points depend on rich feature point information in the environment, so their scope of application is severely limited.
Summary of the invention
In order to solve the technical problems raised by the above background art, the present invention aims to provide a camera tracking method for a depth camera that selects different processing modes according to changes in the gray-level gradient of the image, thereby enhancing applicability.
In order to achieve the above technical purpose, the technical scheme of the present invention is as follows:
A camera tracking method for a depth camera comprises the following steps:
(1) initialize the pose of the depth camera;
(2) convert the color image acquired by the depth camera into a gray-level image;
(3) extract the pixels in the gray-level image whose gray-level gradient change exceeds a given threshold a, and take these pixels as the pixels with distinct gray-level gradient;
(4) if the number of pixels with distinct gray-level gradient exceeds a given threshold b, construct a photometric error function and a depth value error function for these pixels, construct a joint objective function from the two-norms of the two functions, and estimate the change of the camera pose by optimizing the joint objective function to obtain the camera pose at the current time; if the number of pixels with distinct gray-level gradient does not exceed the given threshold b, go to step (5);
(5) construct a signed distance function model from the depth map data at the current time so as to quantify the distance between the spatial voxel grid and the perceived object surface, construct an objective function from the signed distance function model, and obtain the camera pose at the current time by optimizing the objective function.
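As an illustration of how steps (2)-(4) of the above scheme might decide between the two tracking modes, the following is a minimal Python sketch assuming OpenCV and NumPy; the Sobel-based gradient magnitude, the function name select_tracking_mode, and the default values of the thresholds a and b are assumptions for illustration, not part of the patent.

```python
import cv2
import numpy as np

def select_tracking_mode(color_image, a=20.0, b=500):
    """Return 'vision' for the vision-based mode or 'depth' for the SDF-based mode."""
    gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)            # step (2)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)                           # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)                           # vertical gradient
    distinct = np.hypot(gx, gy) > a                                  # step (3): threshold a
    return 'vision' if np.count_nonzero(distinct) > b else 'depth'   # step (4): threshold b
```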
Further, in step (4), the photometric error function is given by:
E_1(x) = Σ_i | I_n(x) − I_{n−1}( π( T_{n,n−1} · T_{n−1} · π⁻¹(x) ) ) |²
In the above formula, E_1(x) denotes the photometric error function, x denotes a pixel coordinate on the imaging plane, I_n(x) denotes the gray value of the pixel in the n-th frame image, π(·) denotes the re-projection function, π⁻¹(·) denotes the inverse of the re-projection, T_{n,n−1} denotes the incremental change of the camera pose, T_{n−1} denotes the camera pose at the previous time, and i indexes all pixels with distinct gray-level gradient.
Further, in step (4), the depth value error function is given by:
E_z(x) = [ T_{n,n−1} · T_{n−1} · π⁻¹(x) ]_z − Z_n( π( T_{n,n−1} · T_{n−1} · π⁻¹(x) ) )
In the above formula, E_z(x) denotes the depth value error function, Z_n(·) denotes the depth value of the spatial point associated with a pixel with distinct gray-level gradient, and [·]_z denotes taking the component in the z direction.
Further, in step (4), the joint objective function is given by:
E(x) = Σ_i E_1ᵀ(x) · E_1(x) + E_zᵀ(x) · E_z(x)
In the above formula, E(x) denotes the joint objective function and the superscript T denotes transposition.
By solving for the minimum of E(x), T_{n,n−1} is obtained; the camera pose at the current time T_n is then obtained from T_{n,n−1} as T_n = T_{n,n−1} · T_{n−1}.
Further, in step (5), the signed distance function model is as follows: on the perceived three-dimensional surface of an object, the value of the signed distance function is zero; outside the perceived surface, i.e. in front of the object, the signed distance function is positive, and its magnitude is proportional to the distance from the point to the perceived surface; inside the perceived surface, i.e. behind the object, the signed distance function is negative, and its magnitude is proportional to the distance from the point to the perceived surface.
Further, step (5) specifically comprises the following steps:
(501) build a signed distance function model from the current depth map data;
(502) when the next frame of depth map data arrives, obtain the relative pose change between the two adjacent frames from the inertial navigation sensor, and calculate the predicted value of the camera pose at the current time according to:
ET_n = ET_{n,n−1} · T_{n−1}
In the above formula, ET_n is the predicted value of the camera pose at the current time, ET_{n,n−1} is the relative pose change between the two adjacent frames, and T_{n−1} is the camera pose at the previous time;
(503) transform the coordinates of the spatial points perceived in the current frame from the camera coordinate system into the world coordinate system:
P_w = R·P_c + t
In the above formula, P_w is the coordinate of the spatial point in the world coordinate system, P_c is the coordinate of the spatial point in the camera coordinate system, R is the rotation matrix, and t is the translation vector; R and t are obtained from the predicted value ET_n of the camera pose at the current time;
(504) construct the objective function:
E = Σ_i SDF²(P_w)
In the above formula, E is the objective function, SDF²(P_w) denotes the square of the signed distance function at point P_w, and i indexes all pixels in the current frame image;
(505) take ET_n as the initial value for solving the objective function, and adjust around the initial value to obtain the minimum of the objective function; the solution corresponding to that minimum is the camera pose T_n at the current time.
The beneficial effects brought by the above technical scheme:
The present invention does not need to extract features from the color image; instead, only the pixels of the gray-level image whose gradient changes significantly are processed, which greatly reduces the amount of computation. When the gray-level gradient is not distinct, the method switches to a "point cloud to model matching" mode that uses the depth map directly, so it is not restricted to well-lit scenes; even with no light at all, the depth-map-based mode can still work.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Embodiment
The technical scheme of the present invention is described in detail below with reference to the accompanying drawing.
A camera tracking method for a depth camera, as shown in Fig. 1, comprises the following steps.
Step 1: Initialize the pose of the depth camera.
Step 2: Convert the color image acquired by the depth camera into a gray-level image.
Step 3: Extract the pixels in the gray-level image whose gray-level gradient change exceeds a given threshold a, and take these pixels as the pixels with distinct gray-level gradient.
Step 4: If the number of pixels with distinct gray-level gradient exceeds a given threshold b, construct a photometric error function and a depth value error function for these pixels, construct a joint objective function from the two-norms of the two functions, and estimate the change of the camera pose by optimizing the joint objective function to obtain the camera pose at the current time. If the number of pixels with distinct gray-level gradient does not exceed the given threshold b (for example, the imaging environment is dark or the imaged object is a region of uniform color), go to Step 5.
The photometric error function is given by:
E_1(x) = Σ_i | I_n(x) − I_{n−1}( π( T_{n,n−1} · T_{n−1} · π⁻¹(x) ) ) |²
In the above formula, E_1(x) denotes the photometric error function, x denotes a pixel coordinate on the imaging plane, I_n(x) denotes the gray value of the pixel in the n-th frame image, π(·) denotes the re-projection function, π⁻¹(·) denotes the inverse of the re-projection, T_{n,n−1} denotes the incremental change of the camera pose, T_{n−1} denotes the camera pose at the previous time, and i indexes all pixels with distinct gray-level gradient.
For a spatial point [x_c, y_c, z_c]ᵀ and the corresponding pixel [u, v]ᵀ on the imaging plane, with camera focal length [f_x, f_y]ᵀ and imaging-plane optical center [c_x, c_y]ᵀ, the re-projection function (the standard pinhole model) is:
π([x_c, y_c, z_c]ᵀ) = [ f_x·x_c/z_c + c_x , f_y·y_c/z_c + c_y ]ᵀ
The inverse of the re-projection function is:
π⁻¹([u, v]ᵀ) = [ (u − c_x)·d/(s·f_x) , (v − c_y)·d/(s·f_y) , d/s ]ᵀ
In the above formulas, d is the depth value of the pixel and s is the scale factor.
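The following is a minimal NumPy sketch of the re-projection function π and its inverse π⁻¹ under the standard pinhole camera model described above; the function names and the placement of the scale factor s (raw depth d divided by s to obtain metric depth) are assumptions for illustration.

```python
import numpy as np

def project(point_cam, fx, fy, cx, cy):
    """pi: spatial point [xc, yc, zc] -> pixel [u, v] on the imaging plane."""
    xc, yc, zc = point_cam
    return np.array([fx * xc / zc + cx, fy * yc / zc + cy])

def backproject(u, v, d, fx, fy, cx, cy, s=1.0):
    """pi^{-1}: pixel [u, v] with raw depth value d and scale factor s -> spatial point."""
    zc = d / s
    return np.array([(u - cx) * zc / fx, (v - cy) * zc / fy, zc])
```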
The depth value error function is given by:
E_z(x) = [ T_{n,n−1} · T_{n−1} · π⁻¹(x) ]_z − Z_n( π( T_{n,n−1} · T_{n−1} · π⁻¹(x) ) )
In the above formula, E_z(x) denotes the depth value error function, Z_n(·) denotes the depth value of the spatial point associated with a pixel with distinct gray-level gradient, and [·]_z denotes taking the component in the z direction.
The joint objective function is given by:
E(x) = Σ_i E_1ᵀ(x) · E_1(x) + E_zᵀ(x) · E_z(x)
In the above formula, E(x) denotes the joint objective function and the superscript T denotes transposition.
By solving for the minimum of E(x), T_{n,n−1} is obtained; the camera pose at the current time T_n is then obtained from T_{n,n−1} as T_n = T_{n,n−1} · T_{n−1}.
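As an illustration of how the joint objective of Step 4 might be minimized, below is a sketch using SciPy's least_squares over a 6-vector pose increment (axis-angle rotation plus translation). It interprets both the photometric and the depth residual as a warp from the current frame n into the previous frame n−1, uses nearest-neighbour sampling instead of interpolation, and ignores invalid depth values; these simplifications, the function names, and the pose parameterization are assumptions for illustration, not the patent's prescribed solver.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def joint_residuals(xi, gray_n, gray_prev, depth_n, depth_prev, pts, K):
    """Stacked photometric (E_1) and depth (E_z) residuals for the selected pixels.

    xi  : 6-vector, xi[:3] axis-angle rotation, xi[3:] translation (pose increment)
    pts : (N, 2) integer [u, v] coordinates of pixels with distinct gradient in frame n
    K   : 3x3 intrinsic matrix
    """
    R = Rotation.from_rotvec(xi[:3]).as_matrix()
    t = xi[3:]
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    u, v = pts[:, 0], pts[:, 1]
    z = depth_n[v, u]                                      # pi^{-1}: back-project pixels of frame n
    P = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
    Pw = P @ R.T + t                                       # apply the candidate pose increment

    uw = fx * Pw[:, 0] / Pw[:, 2] + cx                     # pi: project into frame n-1
    vw = fy * Pw[:, 1] / Pw[:, 2] + cy
    ui = np.clip(np.round(uw).astype(int), 0, gray_prev.shape[1] - 1)
    vi = np.clip(np.round(vw).astype(int), 0, gray_prev.shape[0] - 1)

    e_photo = gray_n[v, u].astype(float) - gray_prev[vi, ui].astype(float)   # E_1 terms
    e_depth = Pw[:, 2] - depth_prev[vi, ui]                                  # E_z terms
    return np.concatenate([e_photo, e_depth])

# least_squares minimizes the sum of squared residuals, i.e. the joint objective E(x):
# res = least_squares(joint_residuals, np.zeros(6),
#                     args=(gray_n, gray_prev, depth_n, depth_prev, pts, K))
# The current pose T_n is then the estimated increment composed with T_{n-1}.
```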
Step 5: Construct a signed distance function model from the depth map data at the current time so as to quantify the distance between the spatial voxel grid and the perceived object surface, construct an objective function from the signed distance function model, and obtain the camera pose at the current time by optimizing the objective function.
The signed distance function (SDF) model is as follows: on the perceived three-dimensional surface of an object, the value of the signed distance function is zero; outside the perceived surface, i.e. in front of the object, the signed distance function is positive, and its magnitude is proportional to the distance from the point to the perceived surface; inside the perceived surface, i.e. behind the object, the signed distance function is negative, and its magnitude is proportional to the distance from the point to the perceived surface.
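A minimal sketch of querying such a signed distance function model stored as a discrete voxel grid (nearest-voxel lookup; the grid layout and the origin and voxel_size parameters are assumptions for illustration):

```python
import numpy as np

def sdf_lookup(sdf_grid, origin, voxel_size, points):
    """Nearest-voxel signed distance values for world-space points.

    sdf_grid   : (X, Y, Z) array; zero on the perceived surface, positive in front
                 of the object, negative behind it
    origin     : world coordinate of voxel (0, 0, 0)
    voxel_size : edge length of one voxel
    points     : (N, 3) world coordinates P_w
    """
    idx = np.round((points - origin) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(sdf_grid.shape) - 1)     # clamp to the grid bounds
    return sdf_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
```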
Step 5 specifically comprises the following steps:
(1) build a signed distance function model from the current depth map data;
(2) when the next frame of depth map data arrives, obtain the relative pose change between the two adjacent frames from the inertial navigation sensor, and calculate the predicted value of the camera pose at the current time according to:
ET_n = ET_{n,n−1} · T_{n−1}
In the above formula, ET_n is the predicted value of the camera pose at the current time, ET_{n,n−1} is the relative pose change between the two adjacent frames, and T_{n−1} is the camera pose at the previous time;
(3) transform the coordinates of the spatial points perceived in the current frame from the camera coordinate system into the world coordinate system:
P_w = R·P_c + t
In the above formula, P_w is the coordinate of the spatial point in the world coordinate system, P_c is the coordinate of the spatial point in the camera coordinate system, R is the rotation matrix, and t is the translation vector; R and t are obtained from the predicted value ET_n of the camera pose at the current time;
(4) construct the objective function:
E = Σ_i SDF²(P_w)
In the above formula, E is the objective function, SDF²(P_w) denotes the square of the signed distance function at point P_w, and i indexes all pixels in the current frame image;
(5) take ET_n as the initial value for solving the objective function, and adjust around the initial value to obtain the minimum of the objective function; the solution corresponding to that minimum is the camera pose T_n at the current time.
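Putting the sub-steps of Step 5 together, the following sketch refines the inertially predicted pose ET_n by minimizing E = Σ_i SDF²(P_w); it reuses the sdf_lookup sketch above, parameterizes a small correction around the prediction as a 6-vector, and uses a generic SciPy optimizer, all of which are illustrative assumptions rather than the patent's prescribed procedure.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import minimize

def refine_pose_with_sdf(sdf_grid, origin, voxel_size, pts_cam, R0, t0):
    """Refine the predicted pose (R0, t0) = ET_n by minimizing sum_i SDF^2(P_w).

    pts_cam : (N, 3) spatial points of the current frame in the camera coordinate system
    """
    def objective(xi):                                        # xi: small correction around ET_n
        R = Rotation.from_rotvec(xi[:3]).as_matrix() @ R0
        t = t0 + xi[3:]
        P_w = pts_cam @ R.T + t                               # sub-step (3): P_w = R*P_c + t
        d = sdf_lookup(sdf_grid, origin, voxel_size, P_w)     # signed distances at P_w
        return np.sum(d ** 2)                                 # sub-step (4): E = sum SDF^2
    res = minimize(objective, np.zeros(6), method="Nelder-Mead")   # sub-step (5)
    R = Rotation.from_rotvec(res.x[:3]).as_matrix() @ R0
    return R, t0 + res.x[3:]
```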
The embodiment is only intended to illustrate the technical idea of the invention and does not limit the protection scope of the present invention; any change made on the basis of the technical scheme in accordance with the technical idea proposed by the present invention falls within the protection scope of the present invention.

Claims (6)

1. A camera tracking method for a depth camera, characterized by comprising the following steps:
(1) initialize the pose of the depth camera;
(2) convert the color image acquired by the depth camera into a gray-level image;
(3) extract the pixels in the gray-level image whose gray-level gradient change exceeds a given threshold a, and take these pixels as the pixels with distinct gray-level gradient;
(4) if the number of pixels with distinct gray-level gradient exceeds a given threshold b, construct a photometric error function and a depth value error function for these pixels, construct a joint objective function from the two-norms of the two functions, and estimate the change of the camera pose by optimizing the joint objective function to obtain the camera pose at the current time; if the number of pixels with distinct gray-level gradient does not exceed the given threshold b, go to step (5);
(5) construct a signed distance function model from the depth map data at the current time so as to quantify the distance between the spatial voxel grid and the perceived object surface, construct an objective function from the signed distance function model, and obtain the camera pose at the current time by optimizing the objective function.
2. The camera tracking method for a depth camera according to claim 1, characterized in that in step (4) the photometric error function is given by:
E_1(x) = Σ_i | I_n(x) − I_{n−1}( π( T_{n,n−1} · T_{n−1} · π⁻¹(x) ) ) |²
In the above formula, E_1(x) denotes the photometric error function, x denotes a pixel coordinate on the imaging plane, I_n(x) denotes the gray value of the pixel in the n-th frame image, π(·) denotes the re-projection function, π⁻¹(·) denotes the inverse of the re-projection, T_{n,n−1} denotes the incremental change of the camera pose, T_{n−1} denotes the camera pose at the previous time, and i indexes all pixels with distinct gray-level gradient.
3. The camera tracking method for a depth camera according to claim 2, characterized in that in step (4) the depth value error function is given by:
E_z(x) = [ T_{n,n−1} · T_{n−1} · π⁻¹(x) ]_z − Z_n( π( T_{n,n−1} · T_{n−1} · π⁻¹(x) ) )
In the above formula, E_z(x) denotes the depth value error function, Z_n(·) denotes the depth value of the spatial point associated with a pixel with distinct gray-level gradient, and [·]_z denotes taking the component in the z direction.
4. The camera tracking method for a depth camera according to claim 2, characterized in that in step (4) the joint objective function is given by:
E(x) = Σ_i E_1ᵀ(x) · E_1(x) + E_zᵀ(x) · E_z(x)
In the above formula, E(x) denotes the joint objective function and the superscript T denotes transposition;
By solving for the minimum of E(x), T_{n,n−1} is obtained; the camera pose at the current time T_n is then obtained from T_{n,n−1} as T_n = T_{n,n−1} · T_{n−1}.
5. The camera tracking method for a depth camera according to claim 1, characterized in that in step (5) the signed distance function model is as follows: on the perceived three-dimensional surface of an object, the value of the signed distance function is zero; outside the perceived surface, i.e. in front of the object, the signed distance function is positive, and its magnitude is proportional to the distance from the point to the perceived surface; inside the perceived surface, i.e. behind the object, the signed distance function is negative, and its magnitude is proportional to the distance from the point to the perceived surface.
6. The camera tracking method for a depth camera according to claim 5, characterized in that step (5) specifically comprises the following steps:
(501) build a signed distance function model from the current depth map data;
(502) when the next frame of depth map data arrives, obtain the relative pose change between the two adjacent frames from the inertial navigation sensor, and calculate the predicted value of the camera pose at the current time according to:
ET_n = ET_{n,n−1} · T_{n−1}
In the above formula, ET_n is the predicted value of the camera pose at the current time, ET_{n,n−1} is the relative pose change between the two adjacent frames, and T_{n−1} is the camera pose at the previous time;
(503) transform the coordinates of the spatial points perceived in the current frame from the camera coordinate system into the world coordinate system:
P_w = R·P_c + t
In the above formula, P_w is the coordinate of the spatial point in the world coordinate system, P_c is the coordinate of the spatial point in the camera coordinate system, R is the rotation matrix, and t is the translation vector; R and t are obtained from the predicted value ET_n of the camera pose at the current time;
(504) construct the objective function:
E = Σ_i SDF²(P_w)
In the above formula, E is the objective function, SDF²(P_w) denotes the square of the signed distance function at point P_w, and i indexes all pixels in the current frame image;
(505) take ET_n as the initial value for solving the objective function, and adjust around the initial value to obtain the minimum of the objective function; the solution corresponding to that minimum is the camera pose T_n at the current time.
CN201710727980.9A 2017-08-23 2017-08-23 Camera tracking method for depth camera Active CN107527366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710727980.9A CN107527366B (en) 2017-08-23 2017-08-23 Camera tracking method for depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710727980.9A CN107527366B (en) 2017-08-23 2017-08-23 Camera tracking method for depth camera

Publications (2)

Publication Number Publication Date
CN107527366A true CN107527366A (en) 2017-12-29
CN107527366B CN107527366B (en) 2020-04-10

Family

ID=60681959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710727980.9A Active CN107527366B (en) 2017-08-23 2017-08-23 Camera tracking method for depth camera

Country Status (1)

Country Link
CN (1) CN107527366B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615244A (en) * 2018-03-27 2018-10-02 中国地质大学(武汉) A kind of image depth estimation method and system based on CNN and depth filter
CN109947886A (en) * 2019-03-19 2019-06-28 腾讯科技(深圳)有限公司 Image processing method, device, electronic equipment and storage medium
CN110059651A (en) * 2019-04-24 2019-07-26 北京计算机技术及应用研究所 A kind of camera real-time tracking register method
CN110375765A (en) * 2019-06-28 2019-10-25 上海交通大学 Visual odometry method, system and storage medium based on direct method
CN110657803A (en) * 2018-06-28 2020-01-07 深圳市优必选科技有限公司 Robot positioning method, device and storage device
CN110926334A (en) * 2019-11-29 2020-03-27 深圳市商汤科技有限公司 Measuring method, measuring device, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102216957A (en) * 2008-10-09 2011-10-12 埃西斯创新有限公司 Visual tracking of objects in images, and segmentation of images
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN105809687A (en) * 2016-03-08 2016-07-27 清华大学 Monocular vision ranging method based on edge point information in image
CN105825520A (en) * 2015-01-08 2016-08-03 北京雷动云合智能技术有限公司 Monocular SLAM (Simultaneous Localization and Mapping) method capable of creating large-scale map
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102216957A (en) * 2008-10-09 2011-10-12 埃西斯创新有限公司 Visual tracking of objects in images, and segmentation of images
CN105825520A (en) * 2015-01-08 2016-08-03 北京雷动云合智能技术有限公司 Monocular SLAM (Simultaneous Localization and Mapping) method capable of creating large-scale map
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN105809687A (en) * 2016-03-08 2016-07-27 清华大学 Monocular vision ranging method based on edge point information in image
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tang Wei, et al.: "Application of grey relational analysis to error analysis of a binocular vision measurement ***", Optics and Precision Engineering *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615244A (en) * 2018-03-27 2018-10-02 中国地质大学(武汉) A kind of image depth estimation method and system based on CNN and depth filter
CN110657803A (en) * 2018-06-28 2020-01-07 深圳市优必选科技有限公司 Robot positioning method, device and storage device
CN110657803B (en) * 2018-06-28 2021-10-29 深圳市优必选科技有限公司 Robot positioning method, device and storage device
CN109947886A (en) * 2019-03-19 2019-06-28 腾讯科技(深圳)有限公司 Image processing method, device, electronic equipment and storage medium
CN109947886B (en) * 2019-03-19 2023-01-10 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110059651A (en) * 2019-04-24 2019-07-26 北京计算机技术及应用研究所 A kind of camera real-time tracking register method
CN110059651B (en) * 2019-04-24 2021-07-02 北京计算机技术及应用研究所 Real-time tracking and registering method for camera
CN110375765A (en) * 2019-06-28 2019-10-25 上海交通大学 Visual odometry method, system and storage medium based on direct method
CN110375765B (en) * 2019-06-28 2021-04-13 上海交通大学 Visual odometer method, system and storage medium based on direct method
CN110926334A (en) * 2019-11-29 2020-03-27 深圳市商汤科技有限公司 Measuring method, measuring device, electronic device and storage medium

Also Published As

Publication number Publication date
CN107527366B (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN107527366A (en) A kind of camera tracking towards depth camera
CN105096386B (en) A wide range of complicated urban environment geometry map automatic generation method
CN105809687B (en) A kind of monocular vision ranging method based on point information in edge in image
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN104484668B (en) A kind of contour of building line drawing method of the how overlapping remote sensing image of unmanned plane
CN103400409B (en) A kind of coverage 3D method for visualizing based on photographic head attitude Fast estimation
CN104299244B (en) Obstacle detection method and device based on monocular camera
CN104704384B (en) Specifically for the image processing method of the positioning of the view-based access control model of device
CN105956539B (en) A kind of Human Height measurement method of application background modeling and Binocular Vision Principle
CN109544636A (en) A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN103886107B (en) Robot localization and map structuring system based on ceiling image information
CN107292965A (en) A kind of mutual occlusion processing method based on depth image data stream
CN105225230A (en) A kind of method and device identifying foreground target object
CN106940704A (en) A kind of localization method and device based on grating map
CN107833270A (en) Real-time object dimensional method for reconstructing based on depth camera
CN106780592A (en) Kinect depth reconstruction algorithms based on camera motion and image light and shade
CN107481315A (en) A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms
CN107679537A (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matchings
CN105184857A (en) Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging
CN109523589A (en) A kind of design method of more robust visual odometry
CN112270698B (en) Non-rigid geometric registration method based on nearest curved surface
CN107677274A (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
CN111998862B (en) BNN-based dense binocular SLAM method
CN104574387A (en) Image processing method in underwater vision SLAM system
CN107330980A (en) A kind of virtual furnishings arrangement system based on no marks thing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Building 26, Tsinghua Road Science and Technology Park, No. 5708 Jinxiu Avenue, Hefei Economic and Technological Development Zone, Anhui Province, 230000

Patentee after: Hefei Zhuxi Technology Co.,Ltd.

Address before: 201301 Shanghai Pudong New Area China (Shanghai) Pilot Free Trade Zone 5709 Shenjiang Road, Building 1 607, No. 26 Qiuyue Road

Patentee before: SHANGHAI SHIZHI ELECTRONIC TECHNOLOGY CO.,LTD.

PP01 Preservation of patent right

Effective date of registration: 20230824

Granted publication date: 20200410
