CN110223336B - Plane fitting method based on TOF camera data - Google Patents
- Publication number
- CN110223336B (application CN201910445779.0A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- point
- points
- pixel points
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Optics & Photonics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention provides a plane fitting method based on TOF camera data, which comprises the following steps. S1: converting the data of a depth map acquired by a TOF camera into point cloud data, wherein corresponding numbers are kept between the pixel points of the depth map and the point cloud points during the mapping. S2: calculating the principal direction vector of the point cloud corresponding to each pixel point across the full depth map, and determining a sampling point on the plane to be fitted together with its principal direction vector. S3: diffusing step by step from the sampling point by an iterative method, finding all pixel points to be fitted whose principal direction vectors satisfy a preset condition with respect to that of the sampling point, and continuing the iterative diffusion after each pixel point to be fitted is promoted to a new sampling point. S4: connecting all pixel points to be fitted that meet the preset requirements and fitting them into a plane. Each step of the method is optimized for computational cost, so the method can run in real time on the video sequence output by a 3D depth camera.
Description
Technical Field
The invention relates to the technical field of computer image processing, in particular to a plane fitting method based on TOF camera data.
Background
Plane fitting is an important component of the field of 3D reconstruction and measurement. Because the planar features of a geometry can be used for point cloud registration and for simplifying the data in subsequent modeling, plane fitting is the most fundamental step in surface reconstruction from scattered point clouds, and most surface reconstruction algorithms are built on it. In addition, fitting planes with regular features can be used to measure the dimensions of three-dimensional geometric objects: the shape and dimensions of geometric objects are visually identified and calculated by a 3D computer and used as input data for industrial control, an important basis for industrial automation and robot control.
Traditional plane fitting algorithms are based on point clouds of physical surfaces obtained from terrestrial laser scans. Point cloud data obtained by laser scanning is large in volume and high in precision, which helps plane fitting to a certain extent. However, acquiring point clouds by laser scanning requires complex equipment and has low acquisition efficiency, so it is generally limited to professional scenarios. In daily use, TOF cameras, which offer high resolution and can capture the distance information of every pixel in the field of view at once, are more convenient, faster and more efficient than laser scanning, and are favored by 3D researchers. With the large-scale integration of 3D-TOF cameras into mobile phones, the application scenarios of plane fitting algorithms based on TOF depth image data have greatly expanded. Compared with laser-scanned point clouds, the point cloud data a TOF camera restores from its acquired depth maps has lower precision, more noise interference, and relatively poor quality, so traditional plane fitting algorithms designed for laser scanning are ill-suited to working with TOF depth data.
However, the computational load of existing plane fitting methods based on TOF depth data is still large, and their real-time performance on the video sequence output by a 3D depth camera still needs further improvement.
Disclosure of Invention
The invention aims to provide a plane fitting method based on TOF camera data, to solve the problems of heavy computation and poor real-time performance on output video sequences that afflict existing plane fitting methods based on TOF depth data.
To realize plane fitting, the principal direction vectors of all pixels, covering both the plane and a large number of background pixels, must first be calculated; then the angle between each pixel's direction and that of the sampling point is determined from the principal direction vectors, and it is judged whether the pixels lie in the same plane to be fitted. These steps are essential links of any plane fitting algorithm based on a depth map.
In order to achieve the above object, the present invention provides a plane fitting method based on TOF camera data, comprising the steps of:
s1: converting the data of a depth map acquired by a TOF camera into point cloud data, wherein corresponding numbers are kept between the pixel points of the depth map and the point cloud points during the mapping;
s2: calculating the principal direction vector of the point cloud corresponding to each pixel point across the full depth map, and determining a sampling point on the plane to be fitted together with its principal direction vector;
s3: diffusing step by step from the sampling point by an iterative method, finding all pixel points to be fitted whose principal direction vectors satisfy a preset condition with respect to that of the sampling point, and continuing the iterative diffusion after each pixel point to be fitted is promoted to a new sampling point;
s4: connecting all pixel points to be fitted that meet the preset requirements and fitting them into a plane.
Preferably, step S1 specifically includes: obtaining the mathematical mapping relation between the camera coordinate system and the world coordinate system from the camera intrinsic parameters of the TOF camera, converting the depth map data into point cloud data according to this relation, and recording corresponding numbers for point cloud points and pixel points during the one-to-one mapping.
Preferably, step S2 specifically includes: for each pixel point in the depth map, taking the pixel points within its neighborhood region as neighborhood pixel points, and obtaining, by solving an equation, the best approximate principal direction vector of the point cloud corresponding to those neighborhood pixel points, which is used as the principal direction vector of the pixel point; and manually confirming the sampling point on the plane to be fitted while recording its principal direction vector.
Preferably, the step S3 includes:
s31: based on the principal direction vectors of all pixels in the full map, extracting the principal direction vectors of the neighborhood pixels of the sampling point, and calculating the included angle between the principal direction vector of each neighborhood pixel and that of the sampling point;
s32: extracting the neighborhood pixel points whose included angles are smaller than a preset threshold, taking them as new sampling points, and returning to step S31 until no remaining pixel point in the neighborhood of the current newest sampling points has a principal direction vector whose included angle with that of its sampling point is smaller than the preset threshold; all pixel points on the target fitting plane have then been traversed, yielding all pixel points to be fitted.
Preferably, step S4 specifically includes: among the pixel points to be fitted, determining those that lie in the same connected domain as the sampling point, and outputting the qualifying pixel points as the pixel points of the fitting plane.
The method is based on the depth map data output by the TOF camera, and analyzes and calculates with the point cloud corresponding to each pixel point in the depth map to achieve spatial plane fitting. The method has a small computational cost and good real-time performance on output video sequences.
Drawings
FIG. 1 is a general flow chart of the method of the present invention;
FIG. 2 is a schematic view of a neighborhood region selected by the method of the present invention;
FIGS. 3A to 3E are schematic diagrams of successive steps of the iterative plane fitting.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings. The described embodiments are merely some embodiments of the invention rather than all of them, and the invention is intended to cover all modifications that fall within its scope.
To facilitate understanding of the embodiments of the present invention, specific embodiments are described below by way of example with reference to the drawings; these embodiments should not be construed as limiting the invention.
Referring to fig. 1, this embodiment provides a plane fitting method based on TOF camera data, including the following steps:
s1: converting the data of a depth map acquired by a TOF camera into point cloud data, wherein corresponding numbers are kept between the pixel points of the depth map and the point cloud points during the mapping;
s2: calculating the principal direction vector of the point cloud corresponding to each pixel point across the full depth map, and determining a sampling point on the plane to be fitted together with its principal direction vector;
s3: diffusing step by step from the sampling point by an iterative method, finding all pixel points to be fitted whose principal direction vectors satisfy a preset condition with respect to that of the sampling point, and continuing the iterative diffusion after each pixel point to be fitted is promoted to a new sampling point;
s4: connecting all pixel points to be fitted that meet the preset requirements and fitting them into a plane.
Step S1 specifically includes: obtaining the mathematical mapping relation between the camera coordinate system and the world coordinate system from the camera intrinsic parameters of the TOF camera, restoring the depth map data into point cloud data according to this relation, and recording corresponding numbers for point cloud points and pixel points during the one-to-one mapping. The following method is specifically adopted:
Each pixel of the depth map is traversed and numbered as number = (row number − 1) × total column count + (column number − 1), and mapped into point cloud data according to the coordinate mapping relation of formula (1):

$$
\begin{cases}
x = (u - c_x)\,d / f_x \\
y = (v - c_y)\,d / f_y \\
z = d
\end{cases}
\tag{1}
$$

where x, y and z are the spatial coordinates of the point cloud point, u and v are the horizontal and vertical pixel coordinates of the pixel point in the depth map, d is the depth of the depth map pixel at position (u, v), and $f_x, f_y, c_x, c_y$ are the camera intrinsic parameters: $f_x$ and $f_y$ are the focal lengths of the camera along the x-axis and y-axis, respectively, multiplied by the pixel density of the photosensitive device, and $(c_x, c_y)$ are the coordinates of the principal point (optical center). The number of the original pixel point is retained for the point cloud point mapped from each pixel point.
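As a concrete illustration of this mapping (a minimal sketch, not part of the original disclosure: the function name, the use of numpy, and the intrinsic values in the usage comment are assumptions), the back-projection of formula (1) with pixel-number preservation can be written as:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W) into an N x 3 point cloud.

    The flattened row-major index of each pixel, i.e.
    number = row * W + col (equivalent to (row number - 1) * W + (column number - 1)
    for 1-based row/column numbers), doubles as the point's number,
    so points[number] is the cloud point of that pixel.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]      # v: row (vertical), u: column (horizontal)
    z = depth
    x = (u - cx) * z / fx          # formula (1)
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points                  # index into points == pixel number

# hypothetical usage with assumed intrinsics:
# depth = np.load("depth.npy")
# pts = depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Keeping the flattened pixel index as the point number is what lets the later steps move freely between depth-map neighborhoods and point cloud coordinates.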
Further, step S2 specifically includes: for each pixel point in the depth map, the pixel points within its neighborhood region are taken as neighborhood pixel points, and the best approximate principal direction vector of the point cloud corresponding to those neighborhood pixel points is obtained by solving an equation and used as the principal direction vector of the pixel point; the sampling point on the plane to be fitted is confirmed manually, and its principal direction vector is recorded.
Specifically, computing the statistical principal direction vector of the point cloud requires both the spatial coordinates of the point cloud and the neighborhood information of the depth map pixels. Step S2 therefore first builds an image-style representation of the point cloud from the point cloud-pixel numbering relation: the value of each pixel point in the depth map is replaced by the spatial coordinate of its corresponding point cloud point, forming a new matrix img_pcl with img_pcl(m, n) = (x_mn, y_mn, z_mn), where (m, n) is the position of the pixel point and (x_mn, y_mn, z_mn) is the spatial position of the point cloud point corresponding to pixel (m, n). img_pcl is then split into three matrices pcx, pcy, pcz holding the x, y and z coordinates respectively, i.e. pcx(m, n) = x_mn, pcy(m, n) = y_mn, pcz(m, n) = z_mn. For each of pcx, pcy and pcz, the mean over each pixel's neighborhood region is computed as the new value of the neighborhood center point, yielding the matrices pcx_mean, pcy_mean and pcz_mean. Referring to fig. 2, the pixel neighborhood region of this embodiment takes the relative offsets (0, 4), (-2, 2), (-4, 0), (0, 0), (4, 0), (-2, -2), (2, -2), (0, -4). New matrices xx, yy, zz, xy, xz, yz are then obtained by calculation according to formulas (2)-(7):

$$
xx(m,n)=\sum_{(i,j)}\left[pcx(m+i,n+j)-pcx\_mean(m,n)\right]^2 \tag{2}
$$

$$
yy(m,n)=\sum_{(i,j)}\left[pcy(m+i,n+j)-pcy\_mean(m,n)\right]^2 \tag{3}
$$

$$
zz(m,n)=\sum_{(i,j)}\left[pcz(m+i,n+j)-pcz\_mean(m,n)\right]^2 \tag{4}
$$

$$
xy(m,n)=\sum_{(i,j)}\left[pcx(m+i,n+j)-pcx\_mean(m,n)\right]\left[pcy(m+i,n+j)-pcy\_mean(m,n)\right] \tag{5}
$$

$$
xz(m,n)=\sum_{(i,j)}\left[pcx(m+i,n+j)-pcx\_mean(m,n)\right]\left[pcz(m+i,n+j)-pcz\_mean(m,n)\right] \tag{6}
$$

$$
yz(m,n)=\sum_{(i,j)}\left[pcy(m+i,n+j)-pcy\_mean(m,n)\right]\left[pcz(m+i,n+j)-pcz\_mean(m,n)\right] \tag{7}
$$
where (i, j) is the relative position of a neighborhood pixel, taking the values (0, 4), (-2, 2), (-4, 0), (0, 0), (4, 0), (-2, -2), (2, -2), (0, -4). The statistical principal direction vector of the point cloud corresponding to each pixel point is then obtained according to formulas (8)-(10):
$$
\begin{aligned}
nx(m,n)={}&\left[xy^2(m,n)+yy^2(m,n)+yz^2(m,n)\right]\\
&\cdot\left[xx(m,n)\,xz(m,n)+xy(m,n)\,yz(m,n)+xz(m,n)\,zz(m,n)\right]\\
&-\left[xx(m,n)\,xy(m,n)+xy(m,n)\,yy(m,n)+xz(m,n)\,yz(m,n)\right]\\
&\cdot\left[xy(m,n)\,xz(m,n)+yy(m,n)\,yz(m,n)+yz(m,n)\,zz(m,n)\right]
\end{aligned}
\tag{8}
$$

$$
\begin{aligned}
ny(m,n)={}&\left[xx^2(m,n)+xy^2(m,n)+xz^2(m,n)\right]\\
&\cdot\left[xy(m,n)\,xz(m,n)+yy(m,n)\,yz(m,n)+yz(m,n)\,zz(m,n)\right]\\
&-\left[xx(m,n)\,xy(m,n)+xy(m,n)\,yy(m,n)+xz(m,n)\,yz(m,n)\right]\\
&\cdot\left[xx(m,n)\,xz(m,n)+xy(m,n)\,yz(m,n)+xz(m,n)\,zz(m,n)\right]
\end{aligned}
\tag{9}
$$

$$
\begin{aligned}
nz(m,n)={}&\left[xx(m,n)\,xy(m,n)+xy(m,n)\,yy(m,n)+xz(m,n)\,yz(m,n)\right]^2\\
&-\left[xx^2(m,n)+xy^2(m,n)+xz^2(m,n)\right]\cdot\left[xy^2(m,n)+yy^2(m,n)+yz^2(m,n)\right]
\end{aligned}
\tag{10}
$$
where nx(m, n), ny(m, n) and nz(m, n) respectively represent the x, y and z components of the direction vector of the point cloud corresponding to the pixel point at position (m, n). For ease of calculation, the normalized unit vector is taken as the statistical principal direction vector of the point cloud, per formula (11):

$$
\hat{n}(m,n)=\frac{\big(nx(m,n),\,ny(m,n),\,nz(m,n)\big)}{\sqrt{nx^2(m,n)+ny^2(m,n)+nz^2(m,n)}}
\tag{11}
$$
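To make formulas (2)-(11) concrete, here is a rough per-pixel Python sketch (an illustration only, assuming the offset list of fig. 2 as listed above; a real-time implementation would vectorize these neighborhood sums over the whole image rather than looping):

```python
import numpy as np

# neighborhood offsets as listed in the description (fig. 2)
OFFSETS = [(0, 4), (-2, 2), (-4, 0), (0, 0), (4, 0), (-2, -2), (2, -2), (0, -4)]

def principal_directions(img_pcl):
    """Per-pixel statistical principal direction vectors, formulas (2)-(11).

    img_pcl: H x W x 3 array, img_pcl[m, n] = (x_mn, y_mn, z_mn).
    Returns an H x W x 3 array of unit normals (border pixels whose
    neighborhood falls outside the image are left as zero).
    """
    h, w, _ = img_pcl.shape
    normals = np.zeros_like(img_pcl)
    for m in range(4, h - 4):
        for n in range(4, w - 4):
            nb = np.array([img_pcl[m + i, n + j] for i, j in OFFSETS])
            d = nb - nb.mean(axis=0)   # subtract neighborhood mean (pcx_mean etc.)
            C = d.T @ d                # centered second moments, formulas (2)-(7)
            xx, xy, xz = C[0]
            _, yy, yz = C[1]
            zz = C[2, 2]
            # formulas (8)-(10)
            nx = (xy**2 + yy**2 + yz**2) * (xx*xz + xy*yz + xz*zz) \
               - (xx*xy + xy*yy + xz*yz) * (xy*xz + yy*yz + yz*zz)
            ny = (xx**2 + xy**2 + xz**2) * (xy*xz + yy*yz + yz*zz) \
               - (xx*xy + xy*yy + xz*yz) * (xx*xz + xy*yz + xz*zz)
            nz = (xx*xy + xy*yy + xz*yz)**2 \
               - (xx**2 + xy**2 + xz**2) * (xy**2 + yy**2 + yz**2)
            norm = np.sqrt(nx**2 + ny**2 + nz**2)
            if norm > 0:
                normals[m, n] = (nx / norm, ny / norm, nz / norm)  # formula (11)
    return normals
```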
further, step S3 includes:
s31: extracting the principal direction vector of the neighborhood pixel corresponding to the sampling point based on the principal direction vector of each pixel of the full graph, and respectively calculating the included angle between the principal direction vector of each pixel of the neighborhood and the principal direction vector of the sampling point;
s32: and extracting the neighborhood pixel points with the included angles smaller than the preset threshold value, taking the neighborhood pixel points as new sampling points, returning to the step S31 until all other pixel points in the neighborhood of the current latest sampling point do not meet the condition that the included angles between the corresponding main direction vector and the main direction vector of the sampling point are smaller than the preset threshold value, and completing traversing all the pixel points on the target fitting plane to obtain all the pixel points to be fitted.
Specifically, referring to figs. 3A to 3E, in a simplified depth map, the irregular white area represents the plane pixels to be fitted and the light gray area represents background pixels; the principal direction vectors of the point clouds corresponding to all pixel points have already been obtained in step S2.
As shown in FIG. 3A, any point on the plane to be fitted is selected as the sampling point (black point), and the other pixel points in its 5-neighborhood region are selected. For each neighborhood pixel point (i, j), the included angle $\theta_{ij}$ between the principal direction vector $\hat{n}_{ij}$ of the point cloud corresponding to that neighborhood pixel point and the principal direction vector $\hat{n}_{s}$ of the point cloud corresponding to the sampling point is calculated as $\theta_{ij} = \arccos(\hat{n}_{ij} \cdot \hat{n}_{s})$. As shown in FIGS. 3B and 3C, if $\theta_{ij}$ is greater than a preset threshold $\theta_{tk}$, the neighborhood point (i, j) is considered not to lie on the plane to be fitted, as in the dark gray region of FIG. 3C; if $\theta_{ij}$ is less than $\theta_{tk}$, the neighborhood point (i, j) is considered to lie in the same plane as the sampling point, as in the black area of FIG. 3C; its number is recorded, it is taken as the next sampling point, and the above flow of step S3 is repeated. When no pixel point in the neighborhood of any recorded point still satisfies the condition that the included angle of the principal direction vectors is smaller than the threshold, all pixel points of the plane to be fitted are considered found, as shown in FIG. 3D.
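A minimal sketch of this diffusion as a breadth-first flood fill follows (illustrative only: the square window and the `radius` parameter are assumptions standing in for the 5-neighborhood region of the embodiment):

```python
from collections import deque
import numpy as np

def grow_plane(normals, seed, theta_tk, radius=5):
    """Iterative diffusion from a seed pixel (step S3, simplified sketch).

    normals:  H x W x 3 unit principal direction vectors from step S2.
    seed:     (m, n) sampling point chosen on the plane to be fitted.
    theta_tk: angle threshold in radians.
    Returns a boolean H x W mask of pixels to be fitted.
    """
    h, w, _ = normals.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        m, n = queue.popleft()
        n_s = normals[m, n]                    # current sampling point's normal
        for i in range(-radius, radius + 1):   # pixels in the neighborhood region
            for j in range(-radius, radius + 1):
                p, q = m + i, n + j
                if 0 <= p < h and 0 <= q < w and not mask[p, q]:
                    cos = np.clip(np.dot(normals[p, q], n_s), -1.0, 1.0)
                    if np.arccos(cos) < theta_tk:  # theta_ij < theta_tk
                        mask[p, q] = True          # becomes a new sampling point
                        queue.append((p, q))
    return mask
```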
Further preferably, step S4 specifically includes: among the pixel points to be fitted, determining those that lie in the same connected domain as the sampling point, and outputting the qualifying pixel points as the pixel points of the fitting plane.
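One possible realization of this connected-domain check is sketched below using scipy's connected-component labeling, which the patent itself does not prescribe; `mask` and `seed` are assumed to come from the diffusion sketch above:

```python
from scipy import ndimage

def keep_connected(mask, seed):
    """Keep only pixels to be fitted that share a connected domain with the seed."""
    labels, _ = ndimage.label(mask)   # 4-connected components by default
    return labels == labels[seed]     # pixels of the fitting plane
```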
To realize plane fitting, the principal direction vectors of all pixels, covering both the plane and a large number of background pixels, are calculated first; then the angle between each pixel's direction and that of the sampling point is determined from the principal direction vectors, whether the pixels lie in the same plane to be fitted is judged, and the fit is performed. The method is simple and easy to implement, the execution of each step is optimized for computational cost, and the method runs in real time on the video sequence output by a 3D depth camera.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any modification or replacement made by those skilled in the art within the scope of the present invention should be covered by the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (2)
1. A plane fitting method based on TOF camera data, comprising the steps of:
s1: converting the data of a depth map acquired by a TOF camera into point cloud data, wherein corresponding numbers are kept between the pixel points of the depth map and the point cloud points during the mapping;
the step S1 specifically comprises the following steps: obtaining a mathematical mapping relation between a camera coordinate system and a world coordinate system by means of camera internal parameters of the TOF camera, converting the depth map data into point cloud data according to the mathematical mapping relation, recording corresponding numbers of the point cloud and pixel points in a one-to-one mapping process, and specifically adopting the following method:
each pixel of the depth map is traversed and numbered as number = (row number − 1) × total column count + (column number − 1), and mapped into point cloud data according to the coordinate mapping relation of formula (1):

$$
\begin{cases}
x = (u - c_x)\,d / f_x \\
y = (v - c_y)\,d / f_y \\
z = d
\end{cases}
\tag{1}
$$

where x, y and z are the spatial coordinates of the point cloud point, u and v are the horizontal and vertical pixel coordinates of the pixel point in the depth map, d is the depth of the depth map pixel at position (u, v), and $f_x, f_y, c_x, c_y$ are the camera intrinsic parameters: $f_x$ and $f_y$ are the focal lengths of the camera along the x-axis and y-axis, respectively, multiplied by the pixel density of the photosensitive device, and $(c_x, c_y)$ are the coordinates of the principal point (optical center); the number of the original pixel point is retained for the point cloud point mapped from each pixel point;
s2: calculating the principal direction vector of the point cloud corresponding to each pixel point across the full depth map, and determining a sampling point on the plane to be fitted together with its principal direction vector;
the step S2 specifically comprises the following steps: each pixel point in the depth map is taken as a neighborhood pixel point in a neighborhood region of the pixel point, and the optimal approximate principal direction vector of the point cloud corresponding to the neighborhood pixel point is obtained by solving an equation and is used as the principal direction vector of the pixel point; manually confirming the sampling points on the plane to be fitted, and simultaneously recording the main direction vectors of the sampling points;
step S2 first builds an image-style representation of the point cloud from the point cloud-pixel numbering relation: the value of each pixel point in the depth map is replaced by the spatial coordinate of its corresponding point cloud point, forming a new matrix img_pcl with img_pcl(m, n) = (x_mn, y_mn, z_mn), where (m, n) is the position of the pixel point and (x_mn, y_mn, z_mn) is the spatial position of the point cloud point corresponding to pixel (m, n); img_pcl is then split into three matrices pcx, pcy, pcz holding the x, y and z coordinates respectively, i.e. pcx(m, n) = x_mn, pcy(m, n) = y_mn, pcz(m, n) = z_mn; for each of pcx, pcy and pcz, the mean over each pixel's neighborhood region is computed as the new value of the neighborhood center point, yielding the matrices pcx_mean, pcy_mean and pcz_mean;
new matrices xx, yy, zz, xy, xz, yz are obtained by calculation according to formulas (2)-(7):

$$
xx(m,n)=\sum_{(i,j)}\left[pcx(m+i,n+j)-pcx\_mean(m,n)\right]^2 \tag{2}
$$

$$
yy(m,n)=\sum_{(i,j)}\left[pcy(m+i,n+j)-pcy\_mean(m,n)\right]^2 \tag{3}
$$

$$
zz(m,n)=\sum_{(i,j)}\left[pcz(m+i,n+j)-pcz\_mean(m,n)\right]^2 \tag{4}
$$

$$
xy(m,n)=\sum_{(i,j)}\left[pcx(m+i,n+j)-pcx\_mean(m,n)\right]\left[pcy(m+i,n+j)-pcy\_mean(m,n)\right] \tag{5}
$$

$$
xz(m,n)=\sum_{(i,j)}\left[pcx(m+i,n+j)-pcx\_mean(m,n)\right]\left[pcz(m+i,n+j)-pcz\_mean(m,n)\right] \tag{6}
$$

$$
yz(m,n)=\sum_{(i,j)}\left[pcy(m+i,n+j)-pcy\_mean(m,n)\right]\left[pcz(m+i,n+j)-pcz\_mean(m,n)\right] \tag{7}
$$

where (i, j) is the relative position of a neighborhood pixel;

the statistical principal direction vector of the point cloud corresponding to each pixel point is obtained by calculation according to formulas (8)-(10):

$$
\begin{aligned}
nx(m,n)={}&\left[xy^2(m,n)+yy^2(m,n)+yz^2(m,n)\right]\\
&\cdot\left[xx(m,n)\,xz(m,n)+xy(m,n)\,yz(m,n)+xz(m,n)\,zz(m,n)\right]\\
&-\left[xx(m,n)\,xy(m,n)+xy(m,n)\,yy(m,n)+xz(m,n)\,yz(m,n)\right]\\
&\cdot\left[xy(m,n)\,xz(m,n)+yy(m,n)\,yz(m,n)+yz(m,n)\,zz(m,n)\right]
\end{aligned}
\tag{8}
$$

$$
\begin{aligned}
ny(m,n)={}&\left[xx^2(m,n)+xy^2(m,n)+xz^2(m,n)\right]\\
&\cdot\left[xy(m,n)\,xz(m,n)+yy(m,n)\,yz(m,n)+yz(m,n)\,zz(m,n)\right]\\
&-\left[xx(m,n)\,xy(m,n)+xy(m,n)\,yy(m,n)+xz(m,n)\,yz(m,n)\right]\\
&\cdot\left[xx(m,n)\,xz(m,n)+xy(m,n)\,yz(m,n)+xz(m,n)\,zz(m,n)\right]
\end{aligned}
\tag{9}
$$

$$
\begin{aligned}
nz(m,n)={}&\left[xx(m,n)\,xy(m,n)+xy(m,n)\,yy(m,n)+xz(m,n)\,yz(m,n)\right]^2\\
&-\left[xx^2(m,n)+xy^2(m,n)+xz^2(m,n)\right]\cdot\left[xy^2(m,n)+yy^2(m,n)+yz^2(m,n)\right]
\end{aligned}
\tag{10}
$$

where nx(m, n), ny(m, n) and nz(m, n) respectively represent the x, y and z components of the direction vector of the point cloud corresponding to the pixel point at position (m, n), and for ease of calculation the normalized unit vector is taken as the statistical principal direction vector of the point cloud per formula (11):

$$
\hat{n}(m,n)=\frac{\big(nx(m,n),\,ny(m,n),\,nz(m,n)\big)}{\sqrt{nx^2(m,n)+ny^2(m,n)+nz^2(m,n)}}
\tag{11}
$$
s3: diffusing step by step from the sampling point by an iterative method, finding all pixel points to be fitted whose principal direction vectors satisfy a preset condition with respect to that of the sampling point, and continuing the iterative diffusion after each pixel point to be fitted is promoted to a new sampling point;
s4: connecting all pixel points to be fitted that meet the preset requirements and fitting them into a plane;
wherein, step S3 includes:
s31: based on the principal direction vectors of all pixels in the full map, extracting the principal direction vectors of the neighborhood pixels of the sampling point, and calculating the included angle between the principal direction vector of each neighborhood pixel and that of the sampling point;
s32: extracting the neighborhood pixel points whose included angles are smaller than a preset threshold, taking them as new sampling points, and returning to step S31 until no remaining pixel point in the neighborhood of the current newest sampling points has a principal direction vector whose included angle with that of its sampling point is smaller than the preset threshold; all pixel points on the target fitting plane have then been traversed, yielding all pixel points to be fitted.
2. The plane fitting method based on TOF camera data according to claim 1, wherein step S4 specifically includes: among the pixel points to be fitted, determining those that lie in the same connected domain as the sampling point, and outputting the qualifying pixel points as the pixel points of the fitting plane.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910445779.0A CN110223336B (en) | 2019-05-27 | 2019-05-27 | Plane fitting method based on TOF camera data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910445779.0A CN110223336B (en) | 2019-05-27 | 2019-05-27 | Plane fitting method based on TOF camera data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110223336A (en) | 2019-09-10
CN110223336B (en) | 2023-10-17
Family
ID=67818073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910445779.0A Active CN110223336B (en) | 2019-05-27 | 2019-05-27 | Plane fitting method based on TOF camera data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110223336B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140363073A1 (en) * | 2013-06-11 | 2014-12-11 | Microsoft Corporation | High-performance plane detection with depth camera data |
- 2019-05-27 CN CN201910445779.0A patent/CN110223336B/en active Active
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2674913A1 (en) * | 2012-06-14 | 2013-12-18 | Softkinetic Software | Three-dimensional object modelling fitting & tracking. |
CN105021124A (en) * | 2015-04-16 | 2015-11-04 | 华南农业大学 | Planar component three-dimensional position and normal vector calculation method based on depth map |
CN104952107A (en) * | 2015-05-18 | 2015-09-30 | 湖南桥康智能科技有限公司 | Three-dimensional bridge reconstruction method based on vehicle-mounted LiDAR point cloud data |
CN105046710A (en) * | 2015-07-23 | 2015-11-11 | 北京林业大学 | Depth image partitioning and agent geometry based virtual and real collision interaction method and apparatus |
CN105180830A (en) * | 2015-09-28 | 2015-12-23 | 浙江大学 | Automatic three-dimensional point cloud registration method applicable to ToF (Time of Flight) camera and system |
CN107220928A (en) * | 2017-05-31 | 2017-09-29 | Method for converting dental CT image pixel data into 3D printing data
CN108335325A (en) * | 2018-01-30 | 2018-07-27 | Fast cube measurement method based on depth camera data
CN108876799A (en) * | 2018-06-12 | 2018-11-23 | Real-time step detection method based on a binocular camera
CN109029284A (en) * | 2018-06-14 | 2018-12-18 | Three-dimensional laser scanner and camera calibration method based on geometric constraints
CN109087325A (en) * | 2018-07-20 | 2018-12-25 | Direct-method point cloud three-dimensional reconstruction and scale determination method based on monocular vision
CN109255813A (en) * | 2018-09-06 | 2019-01-22 | Real-time detection method for the pose of a hand-held object, oriented to human-robot collaboration
CN109544677A (en) * | 2018-10-30 | 2019-03-29 | Indoor scene main structure reconstruction method and system based on depth image key frames
CN109556540A (en) * | 2018-11-07 | 2019-04-02 | Contactless object flatness detection method based on 3D imaging, and computer
Non-Patent Citations (2)
Title |
---|
Research and Implementation of Point-Based Rendering Technology; Hao Chuangang; China Master's Theses Full-text Database, Information Science and Technology; Oct. 15, 2009 (No. 10); full text *
Random Sample Consensus Plane Fitting and Its Applications; Zhou Chunlin et al.; Computer Engineering and Applications; Apr. 15, 2011 (No. 7); full text *
Also Published As
Publication number | Publication date |
---|---|
CN110223336A (en) | 2019-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108665536B (en) | Three-dimensional and live-action data visualization method and device and computer readable storage medium | |
CN110853075B (en) | Visual tracking positioning method based on dense point cloud and synthetic view | |
AU2011312140B2 (en) | Rapid 3D modeling | |
CN110189399B (en) | Indoor three-dimensional layout reconstruction method and system | |
WO2018201677A1 (en) | Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system | |
CN114399554A (en) | Calibration method and system of multi-camera system | |
CN111815707A (en) | Point cloud determining method, point cloud screening device and computer equipment | |
CN111915723A (en) | Indoor three-dimensional panorama construction method and system | |
CN115375868B (en) | Map display method, remote sensing map display method, computing device and storage medium | |
CN112946679B (en) | Unmanned aerial vehicle mapping jelly effect detection method and system based on artificial intelligence | |
CN110942506A (en) | Object surface texture reconstruction method, terminal device and system | |
CN116129037B (en) | Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof | |
CN110738730A (en) | Point cloud matching method and device, computer equipment and storage medium | |
CN113140034A (en) | Room layout-based panoramic new view generation method, device, equipment and medium | |
CN115131494A (en) | Optical remote sensing satellite imaging simulation method and device | |
CN115908554A (en) | High-precision sub-pixel simulation star map and sub-pixel extraction method | |
CN113421217A (en) | Method and device for detecting travelable area | |
CN117671031A (en) | Binocular camera calibration method, device, equipment and storage medium | |
CN111861873B (en) | Method and device for generating simulation image | |
CN110223336B (en) | Plane fitting method based on TOF camera data | |
CN116630953A (en) | Monocular image 3D target detection method based on nerve volume rendering | |
CN116704112A (en) | 3D scanning system for object reconstruction | |
CN116642490A (en) | Visual positioning navigation method based on hybrid map, robot and storage medium | |
CN110706288A (en) | Target detection method, device, equipment and readable storage medium | |
CA3208822A1 (en) | Systems and methods for roof area and slope estimation using a point set |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |