CN107292925A - Measuring method based on a Kinect depth camera - Google Patents

Measuring method based on a Kinect depth camera

Info

Publication number
CN107292925A
Authority
CN
China
Prior art keywords
point
data
cloud
kinect
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710418540.5A
Other languages
Chinese (zh)
Inventor
王宏
刘路平
张东来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School, Harbin Institute of Technology
Priority to CN201710418540.5A
Publication of CN107292925A
Legal status: Pending


Classifications

    • G06T 7/536 Depth or shape recovery from perspective effects, e.g. by using vanishing points
    • G01B 11/02 Measuring arrangements characterised by the use of optical techniques, for measuring length, width or thickness
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques, for measuring contours or curvatures
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/514 Depth or shape recovery from specularities
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/10028 Range image; depth image; 3D point clouds (indexing scheme: image acquisition modality)
    • G06T 2207/30244 Camera pose (indexing scheme: subject of image)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a measuring method based on a Kinect depth camera, comprising the following steps: S1, converting depth image data; S2, calculating the global camera pose information; S3, fusing the depth image data generated by the camera with known pose information into a cube representing the scene within the camera's field of view; S4, performing ray casting on the reconstruction cube from the sensor's viewpoint, the reconstructed point cloud producing a rendered three-dimensional reconstruction cube; S5, calculating the distance between measurement points. The beneficial effects of the invention are: good dynamic performance without requiring high-performance computing equipment; online real-time reconstruction of objects can be achieved; operation is simple, cost is low, measurement accuracy is high, timeliness is strong, and flexibility is high, sufficient for the digital reproduction of target scenes in ordinary users' daily lives.

Description

Measuring method based on a Kinect depth camera
Technical field
The present invention relates to measuring methods, and more particularly to a measuring method based on a Kinect depth camera.
Background art
At present, engineering surveying is highly time-sensitive, its operating environment is complex, and many other professional fields are involved, placing high demands on the professional knowledge and technical skill of the surveying personnel. For example, foundation-pit measurement must typically be coordinated with dewatering and excavation; it has distinct timeliness, the measurement results change dynamically, and a result obtained a few hours earlier may already have lost its meaning. How to restore the object model as faithfully as possible within the prescribed time and measure the object's dimensional information has therefore become a research hotspot for this technology.
Summary of the invention
In order to solve the above problems in the prior art, the invention provides a measuring method based on a Kinect depth camera.
The invention provides a measuring method based on a Kinect depth camera, comprising the following steps:
S1, converting depth image data;
S2, calculating the global camera pose information;
S3, fusing the depth image data generated by the camera with known pose information into a cube representing the scene within the camera's field of view;
S4, performing ray casting on the reconstruction cube from the sensor's viewpoint, the reconstructed point cloud producing a rendered three-dimensional reconstruction cube;
S5, calculating the distance between measurement points.
As a further improvement of the present invention, the depth image data conversion in step S1 comprises the following sub-steps (a unit-conversion sketch follows these sub-steps):
S11, acquiring raw depth and color data, and converting the raw depth frame data obtained from the Kinect into floating-point data in meters;
S12, optimizing the floating-point data: using the coordinate information of the camera, the floating-point data are converted into point cloud data consistent with the orientation of the Kinect camera;
S13, thresholding and denoising the point cloud data; the surface state is obtained with the AlignPointClouds function.
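For illustration only, a minimal Python sketch of the S11 conversion, assuming Kinect v2 raw depth frames arrive as 16-bit unsigned integers in millimeters and that readings outside a working range should be discarded (the range bounds below are typical figures, not values taken from this patent):

```python
import numpy as np

def depth_frame_to_meters(raw_depth: np.ndarray,
                          min_m: float = 0.5,
                          max_m: float = 4.5) -> np.ndarray:
    """Convert a raw Kinect v2 depth frame (uint16, millimeters) to
    float32 meters, clamping to an assumed reliable working range.
    Pixels outside the range (including 0 = no reading) become NaN."""
    depth_m = raw_depth.astype(np.float32) / 1000.0  # mm -> m
    depth_m[(depth_m < min_m) | (depth_m > max_m)] = np.nan
    return depth_m
```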
As a further improvement of the present invention, step S2 comprises the following sub-steps:
S21, registering the point cloud data computed from the reconstructed object against the point cloud data obtained from the Kinect depth image data, using the NuiFusionAlignPointClouds method;
S22, processing the reconstruction cube with the AlignDepthToReconstruction method to obtain a tracking result; if tracking in the scene is interrupted, the camera position is aligned with the last camera position and tracking continues.
As a further improvement of the present invention, step S3 comprises the following sub-steps:
S31, three-dimensional point cloud registration: finding the spatial transformation relating the two point cloud data sets, i.e. the rotation and translation vectors, then transforming the two point cloud data sets into the same coordinate system so that their overlapping regions coincide;
S32, fusing the depth data frame by frame while running a smoothing algorithm in parallel for denoising.
As a further improvement of the present invention, step S31 comprises the following sub-steps (an illustrative implementation follows these sub-steps):
S311, computing the closest points: for each point in set U, finding the nearest corresponding point in set P, and letting the new point set formed by these corresponding points in P be Q = {qi}, i = 0, 1, 2, …, n;
S312, computing the centroid coordinates of the two point sets Q and U, centering both point sets to generate new point sets, computing the positive definite matrix N from the new point sets, and computing the largest eigenvalue of N and its corresponding eigenvector;
S313, since the eigenvector of the largest eigenvalue is equivalent to the rotation quaternion at which the residual sum of squares is minimal, converting the quaternion into the rotation matrix R; once R is determined, the translation vector T, being the difference of the two centroids, is determined from the centroids in the two coordinate systems and the rotation matrix; here R is a 3 × 3 rotation matrix and T a 3 × 1 translation vector;
S314, applying the coordinate transformation: transforming set U with the registration transformation R, T to obtain the new point set U1, i.e. U1 = R·U + T;
S315, computing the root-mean-square error between U1 and Q; if it is below the preset limit ε, terminating; otherwise replacing U with U1 and repeating the above steps.
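As an illustrative, non-authoritative sketch of sub-steps S311 to S315 in Python: the 4 × 4 matrix N below follows Horn's closed-form quaternion solution, which is one standard construction of the positive definite matrix the description refers to; the point sets U and P and the limit eps are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3 x 3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def icp_quaternion(U, P, eps=1e-6, max_iter=50):
    """Align point set U (n x 3) to reference set P (m x 3) following
    S311-S315: closest points, centering, Horn's matrix N, largest
    eigenvector as rotation quaternion, iterate until the RMS error
    drops below eps. Returns the accumulated rotation and translation."""
    tree = cKDTree(P)
    R_total, T_total = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        Q = P[tree.query(U)[1]]                # S311: closest points in P
        cu, cq = U.mean(axis=0), Q.mean(axis=0)
        S = (U - cu).T @ (Q - cq)              # S312: 3 x 3 cross-covariance
        Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
        N = np.array([
            [Sxx+Syy+Szz, Syz-Szy,      Szx-Sxz,      Sxy-Syx],
            [Syz-Szy,     Sxx-Syy-Szz,  Sxy+Syx,      Szx+Sxz],
            [Szx-Sxz,     Sxy+Syx,     -Sxx+Syy-Szz,  Syz+Szy],
            [Sxy-Syx,     Szx+Sxz,      Syz+Szy,     -Sxx-Syy+Szz]])
        vals, vecs = np.linalg.eigh(N)         # S312: eigen-decomposition
        R = quat_to_rot(vecs[:, np.argmax(vals)])  # S313: quaternion -> R
        T = cq - R @ cu                        # S313: T from centroids
        U = U @ R.T + T                        # S314: U1 = R*U + T
        R_total, T_total = R @ R_total, R @ T_total + T
        rms = np.sqrt(np.mean(np.sum((U - Q)**2, axis=1)))
        if rms < eps:                          # S315: stop below the limit
            break
    return R_total, T_total
```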
As a further improvement of the present invention, step S4 comprises the following sub-steps:
S41, generating a point cloud from the micro-polygons of the direct illumination;
S42, computing n−1 bounces of indirect illumination for the point cloud;
S43, computing and rendering the indirect illumination from the point cloud.
As a further improvement of the present invention, step S5 comprises the following sub-steps (a distance-measurement sketch follows these sub-steps):
S51, reading the point cloud information of the measurement points from the generated point cloud;
S52, according to the measurement requirements, performing length and curvature calculations on the information contained in the point cloud.
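A minimal sketch of the S5 distance computation between two selected measurement points; the index-based selection is a hypothetical interface, not something the patent specifies:

```python
import numpy as np

def measure_distance(cloud: np.ndarray, idx_a: int, idx_b: int) -> float:
    """Euclidean distance in meters between two measurement points,
    selected by row index in an N x 3 point cloud."""
    return float(np.linalg.norm(cloud[idx_a] - cloud[idx_b]))
```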
The beneficial effects of the invention are: good dynamic performance without requiring high-performance computing equipment; online real-time reconstruction of objects can be achieved; operation is simple, cost is low, measurement accuracy is high, timeliness is strong, and flexibility is high, sufficient for the digital reproduction of target scenes in ordinary users' daily lives.
Brief description of the drawings
Fig. 1 is a measurement block diagram of the measuring method based on a Kinect depth camera of the present invention.
Fig. 2 is a diagram of the non-linear camera perspective projection model of the measuring method based on a Kinect depth camera of the present invention.
Fig. 3 is a flow chart of the Kinect V2 acquiring color data from the SDK in the measuring method based on a Kinect depth camera of the present invention.
Fig. 4 is a flow chart of the Kinect V2 acquiring depth data from the SDK in the measuring method based on a Kinect depth camera of the present invention.
Fig. 5 is the basic flow chart of the ICP algorithm in the measuring method based on a Kinect depth camera of the present invention.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings and specific embodiments.
As shown in Figs. 1 to 5, a measuring method based on a Kinect depth camera comprises the following steps:
(1) Raw data acquisition and scaling: raw depth and color data are acquired with a Kinect V2 device, and the SDK converts the raw depth frame data obtained from the Kinect into floating-point data in meters;
(2) Spatial view conversion: the depth data are optimized; using the coordinate information of the camera, the floating-point data are converted into point cloud data consistent with the orientation of the Kinect camera (a back-projection sketch is given after this list);
(3) Point cloud pre-processing: the point cloud data are thresholded and denoised; the surface state is obtained with the AlignPointClouds function;
(4) Point cloud registration: the point cloud computed from the reconstructed object is registered against the point cloud obtained from the Kinect depth image data, using the NuiFusionAlignPointClouds method;
(5) Point cloud tracking: processing the reconstruction cube with the AlignDepthToReconstruction method yields a higher-precision tracking result; if tracking in the scene is interrupted, the camera position must be aligned with the last camera position before tracking can resume;
(6) Point cloud fusion: the two point cloud data sets are transformed into the same coordinate system so that their overlapping regions coincide; the depth data are fused frame by frame while a smoothing algorithm runs in parallel for denoising. Specifically, three-dimensional point cloud registration finds the spatial transformation relating the two point cloud data sets, i.e. the rotation and translation vectors, and transforms the two sets into the same coordinate system so that their overlapping regions coincide. The three-dimensional point cloud registration proceeds as follows:
A1) compute the closest points: for each point in set U, find the nearest corresponding point in set P, and let the new point set formed by these corresponding points in P be Q = {qi}, i = 0, 1, 2, …, n;
A2) compute the centroid coordinates of the two point sets Q and U, center both point sets to generate new point sets, compute the positive definite matrix N from the new point sets, and compute the largest eigenvalue of N and its corresponding eigenvector;
A3) since the eigenvector of the largest eigenvalue is equivalent to the rotation quaternion at which the residual sum of squares is minimal, convert the quaternion into the rotation matrix R; once R is determined, the translation vector T, being simply the difference of the two centroids, is determined from the centroids in the two coordinate systems and the rotation matrix; here R is a 3 × 3 rotation matrix and T a 3 × 1 translation vector;
A4) apply the coordinate transformation: transform set U with the registration transformation R, T to obtain the new point set U1, i.e. U1 = R·U + T;
A5) compute the root-mean-square error between U1 and Q; if it is below the preset limit ε, terminate; otherwise replace U with U1 and repeat the above steps;
(7) Point cloud rendering: a point cloud is generated from the micro-polygons of the direct illumination, and n−1 bounces of indirect illumination are computed for the point cloud (this step can be omitted if only a single bounce is required); the indirect illumination is then computed and rendered from the point cloud;
(8) Point cloud measurement: point cloud information is read at the measurement positions, and length, curvature, and angle information is measured.
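For illustration only, a minimal Python sketch of step (2), assuming a standard pinhole back-projection with hypothetical Kinect v2 depth intrinsics (the focal lengths and principal point below are typical published figures, not values taken from this patent):

```python
import numpy as np

# Hypothetical Kinect v2 depth intrinsics (typical published values).
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point of the 512 x 424 depth frame

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Back-project a depth image in meters (H x W, NaN = invalid)
    into an N x 3 point cloud in the camera coordinate frame."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.dstack([x, y, z]).reshape(-1, 3)
    return pts[~np.isnan(pts[:, 2])]   # drop invalid pixels
```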
The measuring method based on a Kinect depth camera provided by the present invention is mainly an improvement based on ICP and comprises the following steps:
(1) Point selection: from the point cloud data produced by the depth sensor, a subset of points must be selected as samples for computing the optimal transformation between the two point cloud data sets, after which this transformation is tested on the entire data set. Within a source point cloud data set, some points have been shown to be more suitable as sample points, because such points are easier to identify and match. For RGB-D data, selecting sample points using SIFT (scale-invariant feature transform) features has been proposed and has proven very effective. For grooved planar target objects, sampling should ensure that the angles of the normal vectors of the sampled points are widely distributed. Which sampling method to use should be decided according to each specific problem;
(2) Point matching: the most critical step in the ICP algorithm is the matching of corresponding points. The main approaches are to find the closest point in the other point cloud, or to find the intersection of the source data points with the target surface (mesh point cloud matching). For SIFT-feature-based sampling, the SIFT features can again be sorted, and a brute-force search or a k-d tree can be used to accelerate finding the closest matching point, with the additional requirement that the inner product of the two normal vectors exceed a set threshold;
(3) Point-pair weighting: because some point pairs match better than others, weights are assigned to well-matched pairs in a more informed way, thereby improving the quality of the resulting transformation matrix. One method assigns larger weights to pairs with smaller distances between corresponding points. Another method, aimed at RGB-D data, considers the color value of the point, or its SIFT feature distance, with lower distances receiving larger weights. Other weighting schemes include constant weights, weights that grow as the normal vectors agree more closely, and weights distributed according to the reliability of the depth sensor. Finally, the estimated noise can be used to weight the matches;
(4) Point-pair rejection: outliers that disturb the error minimization are excluded; these outliers come from sensor noise or from the non-overlapping regions of the depth images of two successive frames. This step greatly influences the accuracy of the registration, but it cannot improve the convergence rate of the registration. The main way of handling this problem is to remove the data points on the edges of the point cloud, since these most probably lie in the non-overlapping region. Other methods exist, such as removing point pairs whose distance exceeds a given threshold;
(5) Error metric and minimization: once the data points have been selected and matched, the point pairs weighted, and the outliers removed, the two point sets must be expressed with a suitable error metric, which then has to be minimized. The most direct way is to consider the sum of squared distances between corresponding points. Suppose the two input point clouds are Psrc = {p1, …, pn} and Pdst = {q1, …, qm}; the goal is to find, within the six degrees of freedom of rotation and translation, the transformation (R, T) that minimizes the error function. The mean-square-distance error function is

φ(R, T) = (1/n) · Σi ||ψ(pi) − (R·pi + T)||², i = 1, …, n,

where ψ is the function mapping each point of Psrc to its closest point in Pdst. The crucial problem here is that the actual value of the transformation affects the output pairing. There are two methods of solving this problem. One is the "point-to-point" method, which computes the spatial distance between the two points and solves with the linear operator SVD (singular value decomposition) or the orthogonal matrix method. The other is the "point-to-plane" method, which computes the normal-vector distance from the point to the plane and solves with the non-linear Levenberg-Marquardt operator.
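As an illustrative sketch (not the patent's own code), the closed-form "point-to-point" solution via SVD mentioned above, for point pairs that have already been matched, weighted, and filtered:

```python
import numpy as np

def best_rigid_transform_svd(P_src: np.ndarray, Q_dst: np.ndarray):
    """Find R, T minimizing sum ||q_i - (R p_i + T)||^2 for matched
    point pairs (both arrays n x 3), via SVD of the cross-covariance
    matrix (the Kabsch / orthogonal-matrix solution)."""
    cp, cq = P_src.mean(axis=0), Q_dst.mean(axis=0)
    H = (P_src - cp).T @ (Q_dst - cq)   # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = cq - R @ cp
    return R, T
```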
Case analysis
Based on an MFC application software platform, the following experiment was completed: an industrial robot was measured using the present invention.
By exporting an STL model, the object dimensions were measured with the SOLIDWORKS measuring tools as follows: the measured cuboid is 0.31864 m × 0.50751 m × 0.15262 m, while the actual object dimensions are 0.321 m × 0.501 m × 0.163 m; the error is within 0.02 m.
The experimental results show that the present invention offers high measurement accuracy, strong timeliness, high flexibility, and low cost.
The measuring method based on a Kinect depth camera provided by the present invention is divided into two main stages: a coarse registration estimate is performed first, and a fine registration estimate is performed afterwards. First, feature points are extracted and point feature histograms are computed; correspondence estimation is carried out with the help of a k-d tree, and mismatched points are removed with the random sample consensus method to obtain a coarse registration alignment. Fine registration is then performed with the iterative closest point algorithm, solving for the spatial transformation matrix with a non-linear operator (a coarse-to-fine sketch follows this paragraph). In the fusion stage, the registered point clouds are fused, impurity points and outliers are removed, and a polygonal mesh representation together with surfel representation and texture-mapping techniques is adopted to obtain a visualized three-dimensional model of the real object. Finally, point cloud information is read at the measurement positions, and length, curvature, and angle information is measured. The present invention thus improves the Kinect reconstruction model; the improved method achieves online real-time reconstruction of objects, simple operation, low cost, high measurement accuracy, strong timeliness, and high flexibility.
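A non-authoritative sketch of such a coarse-to-fine registration pipeline, assuming the Open3D library (which the patent does not use; its feature, RANSAC, and ICP routines stand in for the point-feature-histogram, random-sample-consensus, and closest-point steps described above), with src and dst as open3d.geometry.PointCloud inputs and voxel as a placeholder resolution:

```python
import open3d as o3d

def coarse_to_fine_register(src, dst, voxel=0.01):
    """Coarse FPFH + RANSAC alignment followed by fine ICP refinement."""
    reg = o3d.pipelines.registration
    src_d, dst_d = src.voxel_down_sample(voxel), dst.voxel_down_sample(voxel)
    for pc in (src_d, dst_d):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    # Point feature histograms (FPFH) for correspondence estimation.
    feat = lambda pc: reg.compute_fpfh_feature(
        pc, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    # Coarse stage: RANSAC over feature correspondences.
    coarse = reg.registration_ransac_based_on_feature_matching(
        src_d, dst_d, feat(src_d), feat(dst_d), True, voxel * 1.5,
        reg.TransformationEstimationPointToPoint(False), 3, [],
        reg.RANSACConvergenceCriteria(100000, 0.999))
    # Fine stage: point-to-plane ICP seeded with the coarse alignment.
    fine = reg.registration_icp(
        src_d, dst_d, voxel * 0.5, coarse.transformation,
        reg.TransformationEstimationPointToPlane())
    return fine.transformation
```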
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments; it should not be concluded that the specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the technical field of the present invention, several simple deductions or substitutions may also be made without departing from the concept of the present invention, and all of these shall be deemed to fall within the protection scope of the present invention.

Claims (8)

1. A measuring method based on a Kinect depth camera, characterized in that it comprises the following steps:
S1, converting depth image data;
S2, calculating the global camera pose information;
S3, fusing the depth image data generated by the camera with known pose information into a cube representing the scene within the camera's field of view;
S4, performing ray casting on the reconstruction cube from the sensor's viewpoint, the reconstructed point cloud producing a rendered three-dimensional reconstruction cube;
S5, calculating the distance between measurement points.
2. The measuring method based on a Kinect depth camera according to claim 1, characterized in that the depth image data conversion in step S1 comprises the following sub-steps:
S11, raw data acquisition and scaling: acquiring raw depth and color data, and converting the raw depth frame data obtained from the Kinect into floating-point data in meters;
S12, spatial view conversion: optimizing the floating-point data; using the coordinate information of the camera, the floating-point data are converted into point cloud data consistent with the orientation of the Kinect camera;
S13, point cloud pre-processing: thresholding and denoising the point cloud data; the surface state is obtained with the AlignPointClouds function.
3. The measuring method based on a Kinect depth camera according to claim 2, characterized in that in step S11 the raw depth and color data are acquired with a Kinect V2 device, and the raw depth frame data obtained from the Kinect are converted into floating-point data in meters using the SDK.
4. The measuring method based on a Kinect depth camera according to claim 1, characterized in that step S2 comprises the following sub-steps:
S21, point cloud registration: registering the point cloud data computed from the reconstructed object against the point cloud data obtained from the Kinect depth image data, using the NuiFusionAlignPointClouds method;
S22, point cloud tracking: processing the reconstruction cube with the AlignDepthToReconstruction method to obtain a tracking result; if tracking in the scene is interrupted, the camera position is aligned with the last camera position and tracking continues.
5. The measuring method based on a Kinect depth camera according to claim 1, characterized in that step S3 comprises the following sub-steps:
S31, three-dimensional point cloud registration: finding the spatial transformation relating the two point cloud data sets, i.e. the rotation and translation vectors, then transforming the two point cloud data sets into the same coordinate system so that their overlapping regions coincide;
S32, fusing the depth data frame by frame while running a smoothing algorithm in parallel for denoising.
6. The measuring method based on a Kinect depth camera according to claim 5, characterized in that step S31 comprises the following sub-steps:
S311, computing the closest points: for each point in set U, finding the nearest corresponding point in set P, and letting the new point set formed by these corresponding points in P be Q = {qi}, i = 0, 1, 2, …, n;
S312, computing the centroid coordinates of the two point sets Q and U, centering both point sets to generate new point sets, computing the positive definite matrix N from the new point sets, and computing the largest eigenvalue of N and its corresponding eigenvector;
S313, since the eigenvector of the largest eigenvalue is equivalent to the rotation quaternion at which the residual sum of squares is minimal, converting the quaternion into the rotation matrix R; once R is determined, the translation vector T, being the difference of the two centroids, is determined from the centroids in the two coordinate systems and the rotation matrix; here R is a 3 × 3 rotation matrix and T a 3 × 1 translation vector;
S314, applying the coordinate transformation: transforming set U with the registration transformation R, T to obtain the new point set U1, i.e. U1 = R·U + T;
S315, computing the root-mean-square error between U1 and Q; if it is below the preset limit ε, terminating; otherwise replacing U with U1 and repeating the above steps.
7. The measuring method based on a Kinect depth camera according to claim 1, characterized in that step S4 comprises the following sub-steps:
S41, generating a point cloud from the micro-polygons of the direct illumination;
S42, computing n−1 bounces of indirect illumination for the point cloud;
S43, computing and rendering the indirect illumination from the point cloud.
8. The measuring method based on a Kinect depth camera according to claim 1, characterized in that step S5 comprises the following sub-steps:
S51, reading the point cloud information of the measurement points from the generated point cloud;
S52, according to the measurement requirements, performing length and curvature calculations on the information contained in the point cloud.
CN201710418540.5A 2017-06-06 2017-06-06 Measuring method based on a Kinect depth camera Pending CN107292925A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710418540.5A CN107292925A (en) 2017-06-06 2017-06-06 Measuring method based on a Kinect depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710418540.5A CN107292925A (en) 2017-06-06 2017-06-06 Measuring method based on a Kinect depth camera

Publications (1)

Publication Number Publication Date
CN107292925A true CN107292925A (en) 2017-10-24

Family

ID=60094993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710418540.5A Pending CN107292925A (en) Measuring method based on a Kinect depth camera

Country Status (1)

Country Link
CN (1) CN107292925A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108335325A (en) * 2018-01-30 2018-07-27 上海数迹智能科技有限公司 A kind of cube method for fast measuring based on depth camera data
CN108734772A (en) * 2018-05-18 2018-11-02 宁波古德软件技术有限公司 High accuracy depth image acquisition methods based on Kinect fusion
CN108876862A (en) * 2018-07-13 2018-11-23 北京控制工程研究所 A kind of noncooperative target point cloud position and attitude calculation method
CN109357637A (en) * 2018-12-11 2019-02-19 长治学院 A kind of veneer reeling machine roll bending radius of curvature and method for measuring thickness based on depth camera
CN109657403A (en) * 2019-01-07 2019-04-19 南京工业职业技术学院 A kind of three-dimensional live bridge modeling optimization method based on unmanned plane oblique photograph
CN109974687A (en) * 2017-12-28 2019-07-05 周秦娜 Co-located method, apparatus and system in a kind of multisensor room based on depth camera
CN110047144A (en) * 2019-04-01 2019-07-23 西安电子科技大学 A kind of complete object real-time three-dimensional method for reconstructing based on Kinectv2
CN111292326A (en) * 2019-11-29 2020-06-16 北京华捷艾米科技有限公司 Volume measurement method and system based on 3D depth camera
CN111414798A (en) * 2019-02-03 2020-07-14 沈阳工业大学 Head posture detection method and system based on RGB-D image
CN112197773A (en) * 2020-09-30 2021-01-08 江苏集萃未来城市应用技术研究所有限公司 Visual and laser positioning mapping method based on plane information
CN112382359A (en) * 2020-12-09 2021-02-19 北京柏惠维康科技有限公司 Patient registration method and device, electronic equipment and computer readable medium
CN112665528A (en) * 2020-12-31 2021-04-16 河南卫华重型机械股份有限公司 Correction method for laser scanning three-dimensional imaging
CN112682270A (en) * 2020-12-21 2021-04-20 华能安阳能源有限责任公司 Height measuring method for wind turbine generator

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780619A (en) * 2016-11-25 2017-05-31 青岛大学 A kind of human body dimension measurement method based on Kinect depth cameras

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780619A (en) * 2016-11-25 2017-05-31 青岛大学 A kind of human body dimension measurement method based on Kinect depth cameras

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
万松 (WAN Song), "Research on edge detection and registration of 3D point clouds in underground coal mines", China Master's Theses Full-text Database, Information Science and Technology Series *
程彦荣 (CHENG Yanrong), "Research on Kinect-based three-dimensional region measurement technology", China Master's Theses Full-text Database, Information Science and Technology Series *
魏萍 (WEI Ping), "Optimization and extension of point cloud global illumination rendering methods", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109974687A (en) * 2017-12-28 2019-07-05 周秦娜 Co-located method, apparatus and system in a kind of multisensor room based on depth camera
CN108335325A (en) * 2018-01-30 2018-07-27 上海数迹智能科技有限公司 A kind of cube method for fast measuring based on depth camera data
CN108734772A (en) * 2018-05-18 2018-11-02 宁波古德软件技术有限公司 High accuracy depth image acquisition methods based on Kinect fusion
CN108876862A (en) * 2018-07-13 2018-11-23 北京控制工程研究所 A kind of noncooperative target point cloud position and attitude calculation method
CN108876862B (en) * 2018-07-13 2021-12-07 北京控制工程研究所 Non-cooperative target point cloud position posture calculation method
CN109357637A (en) * 2018-12-11 2019-02-19 长治学院 A kind of veneer reeling machine roll bending radius of curvature and method for measuring thickness based on depth camera
CN109357637B (en) * 2018-12-11 2021-12-10 长治学院 Method for measuring curvature radius and thickness of plate rolling machine plate rolling based on depth camera
CN109657403A (en) * 2019-01-07 2019-04-19 南京工业职业技术学院 A kind of three-dimensional live bridge modeling optimization method based on unmanned plane oblique photograph
CN111414798A (en) * 2019-02-03 2020-07-14 沈阳工业大学 Head posture detection method and system based on RGB-D image
CN111414798B (en) * 2019-02-03 2022-12-06 沈阳工业大学 Head posture detection method and system based on RGB-D image
CN110047144A (en) * 2019-04-01 2019-07-23 西安电子科技大学 A kind of complete object real-time three-dimensional method for reconstructing based on Kinectv2
CN111292326A (en) * 2019-11-29 2020-06-16 北京华捷艾米科技有限公司 Volume measurement method and system based on 3D depth camera
CN112197773A (en) * 2020-09-30 2021-01-08 江苏集萃未来城市应用技术研究所有限公司 Visual and laser positioning mapping method based on plane information
CN112382359A (en) * 2020-12-09 2021-02-19 北京柏惠维康科技有限公司 Patient registration method and device, electronic equipment and computer readable medium
CN112682270A (en) * 2020-12-21 2021-04-20 华能安阳能源有限责任公司 Height measuring method for wind turbine generator
CN112682270B (en) * 2020-12-21 2023-01-31 华能安阳能源有限责任公司 Height measuring method for wind turbine generator
CN112665528A (en) * 2020-12-31 2021-04-16 河南卫华重型机械股份有限公司 Correction method for laser scanning three-dimensional imaging
CN112665528B (en) * 2020-12-31 2023-10-20 河南卫华重型机械股份有限公司 Correction method for laser scanning three-dimensional imaging

Similar Documents

Publication Publication Date Title
CN107292925A (en) Measuring method based on a Kinect depth camera
CN111795704B (en) Method and device for constructing visual point cloud map
CN108509848B Real-time detection method and system for three-dimensional objects
CN105913489B An indoor three-dimensional scene reconstruction method using planar features
CN107301654B (en) Multi-sensor high-precision instant positioning and mapping method
CN107038717B A method for automatically analyzing 3D point cloud registration error based on a three-dimensional grid
CN106716450B (en) Image-based feature detection using edge vectors
CN107833181B (en) Three-dimensional panoramic image generation method based on zoom stereo vision
CN104574347B Geometric positioning accuracy evaluation method for in-orbit satellite images based on multi-source remote sensing data
CN109903313B (en) Real-time pose tracking method based on target three-dimensional model
JP4785880B2 (en) System and method for 3D object recognition
CN105069743B Method for real-time image registration in detector stitching
CN108052942B (en) Visual image recognition method for aircraft flight attitude
Dall'Asta et al. A comparison of semiglobal and local dense matching algorithms for surface reconstruction
WO2015139574A1 (en) Static object reconstruction method and system
CN113177977B (en) Non-contact three-dimensional human body size measuring method
CN101777129B (en) Image matching method based on feature detection
JP2009169934A (en) System and method for recognizing deformable object
CN107516322B (en) Image object size and rotation estimation calculation method based on log polar space
CN104021547A (en) Three dimensional matching method for lung CT
CN111768447B (en) Monocular camera object pose estimation method and system based on template matching
JP2019091493A (en) System and method for efficiently scoring probes in image with vision system
CN115375842A (en) Plant three-dimensional reconstruction method, terminal and storage medium
CN108320310B (en) Image sequence-based space target three-dimensional attitude estimation method
CN117036612A (en) Three-dimensional reconstruction method based on nerve radiation field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171024