CN110223336A - Plane fitting method based on TOF camera data - Google Patents

Plane fitting method based on TOF camera data

Info

Publication number
CN110223336A
CN110223336A (application CN201910445779.0A), granted as CN110223336B
Authority
CN
China
Prior art keywords
pixel
point
principal direction
direction vector
sampled point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910445779.0A
Other languages
Chinese (zh)
Other versions
CN110223336B (en
Inventor
Ying Rendong (应忍冬)
Chen Zhuo (陈琢)
Liu Peilin (刘佩林)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201910445779.0A priority Critical patent/CN110223336B/en
Publication of CN110223336A publication Critical patent/CN110223336A/en
Application granted granted Critical
Publication of CN110223336B publication Critical patent/CN110223336B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a plane fitting method based on TOF camera data, comprising the following steps. S1: convert the depth map acquired by a TOF camera into point cloud data, with the pixels of the depth map and the point cloud keeping their index labels through the mapping. S2: compute the principal direction vector of the point cloud corresponding to every pixel of the depth map, and determine a sample point on the plane to be fitted together with its principal direction vector. S3: diffuse gradually outward from the sample point by iteration, finding all matched pixels whose principal direction vectors satisfy a preset condition relative to the sample point, and continue the iteration after promoting each matched pixel to a new sample point. S4: connect all matched pixels that satisfy the preset requirement and output them as the fitted plane. Every step of the method is optimized for computational cost, so it runs in real time on the video sequence output by a 3D depth camera.

Description

Plane fitting method based on TOF camera data
Technical field
The present invention relates to the field of computer image processing, and in particular to a plane fitting method based on TOF camera data.
Background technique
Plane fitting is an important component of 3D reconstruction and measurement. Because the planar features of solid objects can be used for point cloud registration and for simplifying data in subsequent modeling, plane fitting is the most basic step in surface reconstruction from scattered point clouds, and most surface reconstruction algorithms build on it. In addition, fitting planes with regular features can be used to measure the dimensions of three-dimensional geometric objects; recognizing and computing the shape and size of a geometric body through 3D computer vision, as input data for industrial control, is an important foundation of industrial automation and robot control.
Traditional plane fitting algorithms are based on point clouds of physical surfaces obtained by terrestrial laser scanning. Laser-scanned point clouds are large in volume and high in precision, which benefits plane fitting to a certain extent. However, acquiring point clouds by laser scanning requires complex equipment and has low acquisition efficiency, so it is generally limited to professional scenarios. In daily life, the TOF camera, with its high resolution and its ability to obtain the range of every pixel across the whole field of view in a single shot, is more convenient and efficient than laser scanning and is increasingly favored by 3D researchers. As 3D-TOF cameras are integrated into mobile phones on a large scale, the application scenarios of plane fitting algorithms based on TOF depth image data have expanded significantly. A TOF camera restores point cloud data from the acquired depth map; compared with laser-scanned point clouds, the precision is lower, noise interference is heavier, and the data quality is relatively poor, so conventional plane fitting algorithms designed for laser scanning struggle with TOF depth data.
However, existing plane fitting methods based on TOF depth data remain computationally expensive, and their real-time performance on the video sequence output by a 3D depth camera still needs improvement.
Summary of the invention
The purpose of the present invention is to provide a plane fitting method based on TOF camera data, to solve the problems that existing plane fitting methods based on TOF depth data are computationally expensive and have poor real-time performance on output video sequences.
To perform plane fitting, we first need to compute the principal direction vectors of the whole image, which contains the plane as well as a large number of background pixels; then, according to the principal direction vectors, we determine the angle between each pixel's direction and the sample point's direction and judge whether they belong to the same plane to be fitted. These steps are interlinked and are indispensable links of a depth-map-based plane fitting algorithm.
To achieve the above object, the present invention provides a plane fitting method based on TOF camera data, comprising the following steps:
S1: convert the depth map acquired by the TOF camera into point cloud data, with the pixels of the depth map and the point cloud keeping their index labels through the mapping;
S2: compute the principal direction vector of the point cloud corresponding to every pixel of the depth map, and determine a sample point on the plane to be fitted together with its principal direction vector;
S3: diffuse gradually outward from the sample point by iteration, finding all matched pixels whose principal direction vectors satisfy a preset condition relative to the sample point, and continue the diffusion after promoting each matched pixel to a new sample point;
S4: connect all matched pixels that satisfy the preset requirement and output them as the fitted plane.
Preferably, step S1 is specifically: obtain the mathematical mapping between the camera coordinate system and the world coordinate system from the camera intrinsics of the TOF camera, convert the depth map data into point cloud data according to this mapping, and record, one by one, the index labels of the points and pixels during the mapping.
Preferably, step S2 is specifically: for each pixel of the depth map, take the neighboring pixels in its neighborhood region and obtain, by solving equations, the best-approximation principal direction vector of the point cloud corresponding to those neighboring pixels, which serves as the principal direction vector of the pixel; then manually confirm the sample point on the plane to be fitted and record its principal direction vector.
Preferably, step S3 comprises:
S31: based on the principal direction vectors of all pixels in the image, extract the principal direction vectors of the neighboring pixels of the sample point, and compute the angle between each neighboring pixel's principal direction vector and the sample point's principal direction vector;
S32: extract the neighboring pixels whose angle is below a predetermined threshold, take them as new sample points, and return to step S31, until no remaining pixel in the neighborhood of the newest sample points has a principal direction vector whose angle with the sample point's is below the threshold; at that point all pixels of the target plane have been traversed and all matched pixels obtained.
Preferably, step S4 is specifically: among the matched pixels, determine those in the same connected domain as the sample point, and output the pixels satisfying this condition as the pixels of the fitted plane.
The method of the present invention is based on the depth map data output by a TOF camera and analyzes the point cloud corresponding to each depth-map pixel to achieve space plane fitting. Its computational cost is small and its real-time performance on output video sequences is good.
Brief description of the drawings
Fig. 1 is an overall flowchart of the method of the present invention;
Fig. 2 is a schematic diagram of the neighborhood region chosen by the method;
Figs. 3A to 3E are schematic diagrams of the successive steps of the iterative plane fitting.
Specific embodiment
The technical solutions in the embodiments of the present invention are described and discussed clearly and completely below with reference to the accompanying drawings. Obviously, what is described here is only a part of the examples of the invention, not all of them; all other embodiments obtained by those of ordinary skill in the art from these embodiments without creative work fall within the protection scope of the present invention.
To facilitate understanding, further explanation is given below with specific embodiments in conjunction with the drawings; the embodiments do not constitute a limitation of the invention.
As shown in Fig. 1, this embodiment provides a plane fitting method based on TOF camera data, comprising the following steps:
S1: convert the depth map acquired by the TOF camera into point cloud data, with the pixels of the depth map and the point cloud keeping their index labels through the mapping;
S2: compute the principal direction vector of the point cloud corresponding to every pixel of the depth map, and determine a sample point on the plane to be fitted together with its principal direction vector;
S3: diffuse gradually outward from the sample point by iteration, finding all matched pixels whose principal direction vectors satisfy a preset condition relative to the sample point, and continue the diffusion after promoting each matched pixel to a new sample point;
S4: connect all matched pixels that satisfy the preset requirement and output them as the fitted plane.
Step S1 is specifically: obtain the mathematical mapping between the camera coordinate system and the world coordinate system from the camera intrinsics of the TOF camera, restore the depth map data to point cloud data according to this mapping, and record, one by one, the index labels of the points and pixels during the mapping. Here the following method is used:
Traverse every pixel of the depth map, label it according to number = (row number - 1) × total columns + (column number - 1), and map it to point cloud data according to the coordinate mapping of formula (1):

x = (u - cx) · d / fx,  y = (v - cy) · d / fy,  z = d    (1)

where x, y, z are the space coordinates of the point; u and v are the horizontal and vertical pixel coordinates of the pixel in the depth map; d is the depth of the depth-map pixel at position (u, v); fx, fy, cx, cy are the camera intrinsics: fx and fy are the focal length along the x and y axes multiplied by the sensor resolution, and (cx, cy) is the coordinate of the camera aperture center. The point mapped from each pixel keeps the label of its original pixel.
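The conversion of step S1 can be sketched as follows. This is an illustrative implementation of the pinhole back-projection of formula (1); the function and variable names (depth_to_pointcloud, labels) are chosen for the example and do not appear in the patent.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project every depth pixel (u, v, d) to a 3D point.

    x = (u - cx) * d / fx, y = (v - cy) * d / fy, z = d, and each point
    keeps the label m * cols + n of its source pixel, so depth map and
    point cloud stay index-aligned as required by step S1.
    """
    rows, cols = depth.shape
    # v indexes rows (vertical coordinate), u indexes columns (horizontal)
    v, u = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    labels = v * cols + u  # pixel number preserved through the mapping
    return np.stack([x, y, z], axis=-1), labels
```

With fx = fy = 1 and cx = cy = 0, a constant depth of 1 maps pixel (m, n) to the point (n, m, 1), which makes the index alignment easy to check.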
Further, the above step S2 is specifically: for each pixel of the depth map, take the neighboring pixels in its neighborhood region and obtain, by solving equations, the best-approximation principal direction vector of the corresponding point cloud, which serves as the principal direction vector of the pixel; then manually confirm the sample point on the plane to be fitted and record its principal direction vector.
Specifically, computing the statistical principal direction vector of the point cloud requires the space coordinates of the points and the neighborhood information of the depth-map pixels, so step S2 first generates an image-style point cloud representation from the point-to-pixel label relationship. The value of each pixel of the depth map is replaced by the space coordinate of its corresponding point, forming a new matrix img_pcl, i.e. img_pcl(m, n) = (xmn, ymn, zmn), where (m, n) is the position of the pixel and (xmn, ymn, zmn) is the space position of the point corresponding to pixel (m, n). img_pcl is then split into three matrices pcx, pcy, pcz holding the x, y, z coordinates respectively, i.e. pcx(m, n) = xmn, pcy(m, n) = ymn, pcz(m, n) = zmn. For pcx, pcy, pcz, the mean over each pixel's neighborhood region is computed as the new value of the neighborhood center, giving new matrices pcx_mean, pcy_mean, pcz_mean. As shown in Fig. 2, the neighborhood region of this embodiment takes the relative positions (0, 4), (-2, 2), (2, 2), (-4, 0), (0, 0), (4, 0), (-2, -2), (2, -2), (0, -4). New matrices xx, yy, zz, xy, xz, yz, the second-order moments of the centered neighborhood coordinates, are computed according to formulas (2)-(7):

xx(m, n) = Σ(i,j) [pcx(m+i, n+j) - pcx_mean(m, n)]²    (2)
yy(m, n) = Σ(i,j) [pcy(m+i, n+j) - pcy_mean(m, n)]²    (3)
zz(m, n) = Σ(i,j) [pcz(m+i, n+j) - pcz_mean(m, n)]²    (4)
xy(m, n) = Σ(i,j) [pcx(m+i, n+j) - pcx_mean(m, n)] · [pcy(m+i, n+j) - pcy_mean(m, n)]    (5)
xz(m, n) = Σ(i,j) [pcx(m+i, n+j) - pcx_mean(m, n)] · [pcz(m+i, n+j) - pcz_mean(m, n)]    (6)
yz(m, n) = Σ(i,j) [pcy(m+i, n+j) - pcy_mean(m, n)] · [pcz(m+i, n+j) - pcz_mean(m, n)]    (7)

where (i, j) is the relative position of the neighborhood pixel, again taking (0, 4), (-2, 2), (2, 2), (-4, 0), (0, 0), (4, 0), (-2, -2), (2, -2), (0, -4). The statistical principal direction vector of the point cloud at each pixel is then computed according to formulas (8)-(10):
nx(m, n) = [xy²(m, n) + yy²(m, n) + yz²(m, n)] · [xx(m, n)·xz(m, n) + xy(m, n)·yz(m, n) + xz(m, n)·zz(m, n)]
         - [xx(m, n)·xy(m, n) + xy(m, n)·yy(m, n) + xz(m, n)·yz(m, n)] · [xy(m, n)·xz(m, n) + yy(m, n)·yz(m, n) + yz(m, n)·zz(m, n)]    (8)

ny(m, n) = [xx²(m, n) + xy²(m, n) + xz²(m, n)] · [xy(m, n)·xz(m, n) + yy(m, n)·yz(m, n) + yz(m, n)·zz(m, n)]
         - [xx(m, n)·xy(m, n) + xy(m, n)·yy(m, n) + xz(m, n)·yz(m, n)] · [xx(m, n)·xz(m, n) + xy(m, n)·yz(m, n) + xz(m, n)·zz(m, n)]    (9)

nz(m, n) = [xx(m, n)·xy(m, n) + xy(m, n)·yy(m, n) + xz(m, n)·yz(m, n)]²
         - [xx²(m, n) + xy²(m, n) + xz²(m, n)] · [xy²(m, n) + yy²(m, n) + yz²(m, n)]    (10)

where nx(m, n), ny(m, n), nz(m, n) are the x, y, z components of the vector for the point cloud at pixel position (m, n). For convenience of computation, the normalized unit vector is taken as the statistical principal direction vector n̄ of the point cloud, with reference to formula (11):

n̄(m, n) = (nx(m, n), ny(m, n), nz(m, n)) / √(nx²(m, n) + ny²(m, n) + nz²(m, n))    (11)
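As a sketch, formulas (2)-(11) can be read as building the second-order moment matrix M of the centered neighborhood points and taking, up to sign and scale, the cross product of two columns of M². The snippet below assumes this reading; the names principal_direction and OFFSETS are illustrative and not from the patent.

```python
import numpy as np

# The nine neighborhood offsets of Fig. 2
OFFSETS = [(0, 4), (-2, 2), (2, 2), (-4, 0), (0, 0), (4, 0),
           (-2, -2), (2, -2), (0, -4)]

def principal_direction(img_pcl, m, n):
    """Statistical principal direction of the point cloud at pixel (m, n).

    img_pcl is the (H, W, 3) matrix of per-pixel 3D coordinates from the
    text. M collects the moments xx..yz of formulas (2)-(7); the cross
    product of two columns of M @ M reproduces formulas (8)-(10) up to
    sign, and the result is normalized as in formula (11).
    """
    nbrs = np.array([img_pcl[m + i, n + j] for i, j in OFFSETS])
    centered = nbrs - nbrs.mean(axis=0)
    M = centered.T @ centered  # [[xx, xy, xz], [xy, yy, yz], [xz, yz, zz]]
    M2 = M @ M
    normal = np.cross(M2[:, 0], M2[:, 1])
    norm = np.linalg.norm(normal)
    return normal / norm if norm > 0 else normal
```

For points lying exactly on a plane, M is rank-deficient along the plane normal, the columns of M² span the plane, and their cross product points along the normal, which is why this closed form avoids an explicit eigen-decomposition.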
Further, step S3 comprises:
S31: based on the principal direction vectors of all pixels in the image, extract the principal direction vectors of the neighboring pixels of the sample point, and compute the angle between each neighboring pixel's principal direction vector and the sample point's principal direction vector;
S32: extract the neighboring pixels whose angle is below a predetermined threshold, take them as new sample points, and return to step S31, until no remaining pixel in the neighborhood of the newest sample points has a principal direction vector whose angle with the sample point's is below the threshold; at that point all pixels of the target plane have been traversed and all matched pixels obtained.
Specifically, with reference to Figs. 3A to 3E: in the simplified depth map, the irregular white area is the plane pixels to be fitted and the light gray area is background pixels; the principal direction vectors of the point clouds of all pixels have been obtained in step S2.
As shown in Fig. 3A, any point in the plane to be fitted is chosen as the sample point (black dot), and the other pixels of its 5-neighborhood region are taken. The angle θij between the principal direction vector n̄ij of the point cloud of neighborhood pixel (i, j) and the principal direction vector n̄s of the point cloud of the sample point is computed as θij = arccos(n̄ij · n̄s). As shown in Figs. 3B and 3C, if θij is greater than the preset threshold θtk, neighborhood point (i, j) is considered not to lie on the plane to be fitted, like the dark gray area of Fig. 3C; if θij is less than the preset threshold θtk, neighborhood point (i, j) is considered coplanar with the sample point, like the black area of Fig. 3C; its number is recorded, it becomes the next sample point, and the above process of step S3 is repeated. When no neighborhood pixel of any recorded point still satisfies the angle-below-threshold condition, the method considers that all pixels of the plane to be fitted have been found, as shown in Fig. 3D.
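The diffusion of step S3 can be sketched as a breadth-first traversal. The threshold test θij = arccos(n̄ij · n̄s) < θtk follows the text; the function name grow_plane and the configurable neighborhood parameter are assumptions of this example.

```python
import numpy as np
from collections import deque

def grow_plane(normals, seed, theta_tk, neighborhood):
    """Diffuse from the seed pixel, admitting neighbors whose principal
    direction vector makes an angle below theta_tk (radians) with the
    current sample point's; each admitted pixel becomes a new sample
    point, as in steps S31/S32."""
    H, W, _ = normals.shape
    matched = np.zeros((H, W), dtype=bool)
    matched[seed] = True
    queue = deque([seed])
    while queue:
        m, n = queue.popleft()
        for i, j in neighborhood:
            p, q = m + i, n + j
            if 0 <= p < H and 0 <= q < W and not matched[p, q]:
                cos_a = np.clip(np.dot(normals[m, n], normals[p, q]), -1.0, 1.0)
                if np.arccos(cos_a) < theta_tk:
                    matched[p, q] = True   # record the pixel's number
                    queue.append((p, q))   # and use it as the next sample point
    return matched
```

A column of tilted normals blocks the growth, which mirrors the dark gray boundary of Fig. 3C: diffusion stops wherever the angle condition fails.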
Further preferably, the above step S4 is specifically: among the matched pixels, determine those in the same connected domain as the sample point, and output the pixels satisfying this condition as the pixels of the fitted plane.
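Step S4's connected-domain filter can be sketched as a flood fill over the matched mask; the 4-connectivity and the name connected_to_seed are assumptions of this illustration, not details fixed by the patent.

```python
from collections import deque

def connected_to_seed(mask, seed):
    """Keep only the matched pixels that lie in the same connected
    domain as the sample point; these are output as the fitted plane."""
    H, W = len(mask), len(mask[0])
    keep = [[False] * W for _ in range(H)]
    if not mask[seed[0]][seed[1]]:
        return keep
    keep[seed[0]][seed[1]] = True
    queue = deque([seed])
    while queue:
        m, n = queue.popleft()
        for i, j in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity assumed
            p, q = m + i, n + j
            if 0 <= p < H and 0 <= q < W and mask[p][q] and not keep[p][q]:
                keep[p][q] = True
                queue.append((p, q))
    return keep
```

Any matched pixels disconnected from the seed, for example a second coplanar surface elsewhere in the image, are dropped by this step.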
To realize plane fitting, the method first computes the principal direction vectors of the whole image, which contains the plane and a large number of background pixels; then it determines the angle between each pixel's direction and the sample point's direction from the principal direction vectors, judges whether they lie on the same plane to be fitted, and finally performs the fitting. The method is simple and practicable, the concrete implementation of each step is optimized for computational cost, and it can be completed in real time on the video sequence output by a 3D depth camera.
The above is only a specific embodiment, and the protection scope of the present invention is not limited thereto. Any deformation or replacement of the invention made by those skilled in the art within the technical scope disclosed herein shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. A plane fitting method based on TOF camera data, characterized by comprising the following steps:
S1: converting the depth map acquired by a TOF camera into point cloud data, the pixels of the depth map and the point cloud keeping their index labels through the mapping;
S2: computing the principal direction vector of the point cloud corresponding to every pixel of the depth map, and determining a sample point on the plane to be fitted together with its principal direction vector;
S3: diffusing gradually outward from the sample point by iteration, finding all matched pixels whose principal direction vectors satisfy a preset condition relative to the sample point, and continuing the iterative diffusion after promoting each matched pixel to a new sample point;
S4: connecting all matched pixels that satisfy the preset requirement and outputting them as the fitted plane.
2. The plane fitting method based on TOF camera data according to claim 1, characterized in that step S1 is specifically: obtaining the mathematical mapping between the camera coordinate system and the world coordinate system from the camera intrinsics of the TOF camera, converting the depth map data into point cloud data according to this mapping, and recording, one by one, the index labels of the points and pixels during the mapping.
3. The plane fitting method based on TOF camera data according to claim 1, characterized in that step S2 is specifically: for each pixel of the depth map, taking the neighboring pixels in its neighborhood region and obtaining, by solving equations, the best-approximation principal direction vector of the point cloud corresponding to those neighboring pixels as the principal direction vector of the pixel; and manually confirming the sample point on the plane to be fitted while recording its principal direction vector.
4. The plane fitting method based on TOF camera data according to claim 3, characterized in that step S3 comprises:
S31: based on the principal direction vectors of all pixels in the image, extracting the principal direction vectors of the neighboring pixels of the sample point, and computing the angle between each neighboring pixel's principal direction vector and the sample point's principal direction vector;
S32: extracting the neighboring pixels whose angle is below a predetermined threshold, taking them as new sample points, and returning to step S31, until no remaining pixel in the neighborhood of the newest sample points has a principal direction vector whose angle with the sample point's is below the threshold, at which point all pixels of the target fitted plane have been traversed and all matched pixels obtained.
5. The plane fitting method based on TOF camera data according to claim 3, characterized in that step S4 is specifically: among the matched pixels, determining those in the same connected domain as the sample point, and outputting the pixels satisfying this condition as the pixels of the fitted plane.
CN201910445779.0A 2019-05-27 2019-05-27 Plane fitting method based on TOF camera data Active CN110223336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910445779.0A CN110223336B (en) 2019-05-27 2019-05-27 Plane fitting method based on TOF camera data


Publications (2)

Publication Number Publication Date
CN110223336A true CN110223336A (en) 2019-09-10
CN110223336B CN110223336B (en) 2023-10-17

Family

ID=67818073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910445779.0A Active CN110223336B (en) 2019-05-27 2019-05-27 Plane fitting method based on TOF camera data

Country Status (1)

Country Link
CN (1) CN110223336B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2674913A1 (en) * 2012-06-14 2013-12-18 Softkinetic Software Three-dimensional object modelling fitting & tracking.
US20140363073A1 (en) * 2013-06-11 2014-12-11 Microsoft Corporation High-performance plane detection with depth camera data
CN104952107A (en) * 2015-05-18 2015-09-30 湖南桥康智能科技有限公司 Three-dimensional bridge reconstruction method based on vehicle-mounted LiDAR point cloud data
CN105021124A (en) * 2015-04-16 2015-11-04 华南农业大学 Planar component three-dimensional position and normal vector calculation method based on depth map
CN105046710A (en) * 2015-07-23 2015-11-11 北京林业大学 Depth image partitioning and agent geometry based virtual and real collision interaction method and apparatus
CN105180830A (en) * 2015-09-28 2015-12-23 浙江大学 Automatic three-dimensional point cloud registration method applicable to ToF (Time of Flight) camera and system
CN107220928A (en) * 2017-05-31 2017-09-29 中国工程物理研究院应用电子学研究所 A kind of tooth CT image pixel datas are converted to the method for 3D printing data
CN108335325A (en) * 2018-01-30 2018-07-27 上海数迹智能科技有限公司 A kind of cube method for fast measuring based on depth camera data
CN108876799A (en) * 2018-06-12 2018-11-23 杭州视氪科技有限公司 A kind of real-time step detection method based on binocular camera
CN109029284A (en) * 2018-06-14 2018-12-18 大连理工大学 A kind of three-dimensional laser scanner based on geometrical constraint and camera calibration method
CN109087325A (en) * 2018-07-20 2018-12-25 成都指码科技有限公司 A kind of direct method point cloud three-dimensional reconstruction and scale based on monocular vision determines method
CN109255813A (en) * 2018-09-06 2019-01-22 大连理工大学 A kind of hand-held object pose real-time detection method towards man-machine collaboration
CN109544677A (en) * 2018-10-30 2019-03-29 山东大学 Indoor scene main structure method for reconstructing and system based on depth image key frame
CN109556540A (en) * 2018-11-07 2019-04-02 西安电子科技大学 A kind of contactless object plane degree detection method based on 3D rendering, computer


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Chunlin et al., "Random sample consensus plane fitting and its applications" (随机抽样一致性平面拟合及其应用研究), Computer Engineering and Applications (《计算机工程与应用》) *
Hao Chuangang, "Research and implementation of point-based rendering" (基于点的绘制技术的研究与实现), China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库信息科技辑》) *

Also Published As

Publication number Publication date
CN110223336B (en) 2023-10-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant