CN111292326A - Volume measurement method and system based on 3D depth camera - Google Patents

Volume measurement method and system based on 3D depth camera

Info

Publication number
CN111292326A
Authority
CN
China
Prior art keywords
point set
tsdf
camera
value
lattice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010290597.3A
Other languages
Chinese (zh)
Inventor
李伟
李骊
Current Assignee
Beijing HJIMI Technology Co Ltd
Original Assignee
Beijing HJIMI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing HJIMI Technology Co Ltd
Publication of CN111292326A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a volume measurement method and system based on a 3D depth camera. The method measures the volume of an object from depth images captured by a 3D depth camera, requires little manual effort, and is fast. It reconstructs the object surface using ICP (Iterative Closest Point) registration and a TSDF (Truncated Signed Distance Function), achieves high calculation precision, is not limited by object shape, and is therefore suitable for measuring the volume of irregular objects. Under different demand scenarios, the user can adjust the number of cubic lattices according to the actual use scene to control the precision, so the method is easy to popularize and can be widely applied to object measurement in the express delivery industry and similar fields.

Description

Volume measurement method and system based on 3D depth camera
Technical Field
The invention relates to photogrammetry, in particular to a volume measurement method and system based on a 3D depth camera.
Background
In production and daily life, the dimensions of real objects often need to be measured; for example, in the logistics industry the volume of packages and other objects must be measured during transportation. Manual measurement with professional tools depends on those tools and is outdated, tedious, and inefficient. In the prior art, when an object is measured with a 3D depth camera, the user must compute dimensions from rough side-length information between the points to be measured; this approach is slow and imprecise, and the object shape is restricted to regular polygonal objects with a small number of sides.
Disclosure of Invention
The purpose of the invention is as follows: in view of the above-mentioned drawbacks of the prior art, the present invention aims to provide a volume measurement method and system based on a 3D depth camera.
The technical scheme is as follows: a volume measurement method based on a 3D depth camera comprises the following steps:
(1) acquiring depth images of the top and the periphery of an object to be detected;
(2) determining a reconstruction space region, and dividing the reconstruction space into n³ cubic lattices;
(3) calculating camera displacement and rotation parameters between the shooting poses of the depth images captured in step (1) by using an ICP (Iterative Closest Point) registration method;
(4) reconstructing the surface of the object by using TSDF according to the displacement parameter and the rotation parameter of the camera, and judging whether each cubic lattice is outside, inside or on the surface of the object;
(5) the volume of the object is calculated from the total number of cubic lattices on the surface of the object and inside the object.
Further, the step (3) is specifically as follows: following the order in which the images were captured in step (1), starting from the second frame, each frame is compared with the previous frame, and the camera displacement and rotation parameters between the two shooting poses are calculated:
(3.1) setting the current frame as a target depth map and the previous frame as an original depth map;
(3.2) calculating three-dimensional coordinates of all points in the original depth map and the target depth map to generate an original point set and a target point set;
(3.3) for each point in the original point set, searching out the corresponding point closest to the point in the target point set, and combining the corresponding points in the target point set into a new point set;
(3.4) calculating a camera displacement parameter and a rotation parameter between the original point set and the new point set;
(3.5) for the original point set, carrying out coordinate transformation by using the camera displacement parameter and the rotation parameter to obtain a transformation point set;
(3.6) calculating the root mean square error between the transformation point set and the new point set, and taking the camera displacement parameter and the rotation parameter in the step (3.4) as results when the root mean square error is smaller than a preset threshold value; otherwise, the transformed point set is used as the original point set, and the step (3.3) is returned.
Further, in step (3.4) the camera displacement and rotation parameters between the original point set and the new point set are specifically calculated by a least root-mean-square method.
Further, the transformation point set in step (3.5) is obtained as U = R·p1 + T, where U is the transformed point set, p1 is the original point set, T is the camera displacement parameter, and R is the rotation parameter.
Further, the step (4) adopts the following method:
(4.1) traversing all lattices in each frame of image, acquiring coordinates of each lattice in a world coordinate system, and converting the coordinates into coordinates in a camera coordinate system according to the camera displacement parameters and the rotation parameters obtained in the step (3);
(4.2) converting the coordinates under the camera coordinate system into two-dimensional coordinates under the image coordinate system according to the camera internal parameters;
(4.3) skipping the TSDF calculation for a lattice when the depth value at the lattice's coordinate point in the image coordinate system is 0;
when the depth value at the lattice's coordinate point in the image coordinate system is not 0, iteratively correcting the TSDF value of the lattice during the traversal, and judging whether the lattice is outside, inside, or on the surface of the object to be measured according to the TSDF value when the traversal ends;
further, in step (4.3), the method for iteratively correcting the TSDF value of the lattice during the traversal and judging, from the TSDF value at the end of the traversal, whether the lattice is outside, inside, or on the surface of the object to be measured is as follows:
the initial TSDF value is set to the difference between the distance from the lattice's center-point coordinates in the world coordinate system to the camera origin of the frame and the depth value at the current lattice's coordinate point in the image coordinate system;
a weight value for the current calculation of the lattice is set;
the new TSDF value of the lattice is computed as a weighted average:
TSDF_new = (TSDF + TSDF_old * wi_old) / (wi_old + 1);
where TSDF_old is the TSDF value last calculated for the lattice and wi_old is the last weight value of the lattice;
the TSDF value of each lattice is updated iteratively during the traversal; at the end of the traversal, if the TSDF value is 0, the lattice is judged to be on the surface of the object to be measured; if the TSDF value is positive, the lattice is judged to be inside the object; and if the TSDF value is negative, the lattice is judged to be outside the object.
Further, the calculation in step (5) is: volume = n × (volume of each cubic lattice), where n is the total number of cubic lattices on the surface of and inside the object.
A 3D depth camera based volumetric measurement system comprising:
the 3D depth camera is used for continuously shooting depth images of the top and the periphery of the object to be detected;
a reconstruction space dividing unit for determining a reconstruction space region and dividing the reconstruction space into n³ cubic lattices;
the ICP registration unit is used for calculating camera displacement parameters and rotation parameters between shooting poses of all the depth images;
the surface reconstruction unit is used for reconstructing the surface of the object by using TSDF according to the camera displacement parameter and the rotation parameter and judging whether each cubic lattice is positioned outside, inside or on the surface of the object;
and the volume calculation unit is used for calculating the volume of the object according to the total number of the cubic lattices on the surface and in the interior of the object.
Further, the ICP registration unit includes:
the target setting module is used for setting the current frame as a target depth map and the previous frame as an original depth map;
the point set generating module is used for calculating three-dimensional coordinates of all points in the original depth map and the target depth map to generate an original point set and a target point set;
the new point set generating module is used for searching the corresponding points closest to each point in the original point set from the target point set and forming a new point set by the corresponding points in the target point set;
the parameter calculation module is used for calculating camera displacement parameters and rotation parameters between the original point set and the new point set;
the transformation point set module is used for carrying out coordinate transformation on the original point set by using the camera displacement parameter and the rotation parameter to obtain a transformation point set;
the root mean square error module is used for calculating the root mean square error between the transformation point set and the new point set, taking the camera displacement parameter and the rotation parameter as the result when the root mean square error is smaller than a preset threshold value; otherwise, taking the transformation point set as the original point set and returning to the new point set generating module.
Further, the surface reconstruction unit comprises:
the coordinate conversion module is used for acquiring the coordinates of each lattice in the world coordinate system and converting them into coordinates in the camera coordinate system according to the camera displacement and rotation parameters, and then converting the camera-coordinate-system coordinates into two-dimensional coordinates in the image coordinate system according to the camera intrinsic parameters;
the lattice judgment module is used for traversing all lattices, skipping the TSDF calculation for a lattice when the depth value at its coordinate point in the image coordinate system is 0; and, when the depth value is not 0, iteratively correcting the TSDF value of the lattice during the traversal and judging whether the lattice is outside, inside, or on the surface of the object to be measured according to the TSDF value after the traversal ends.
Advantageous effects: the method measures the volume of an object from depth images captured by a 3D depth camera, requires little manual effort, and is fast. It reconstructs the object surface using ICP (Iterative Closest Point) registration and a TSDF (Truncated Signed Distance Function), achieves high calculation precision, is not limited by object shape, and is therefore suitable for measuring the volume of irregular objects. Under different demand scenarios, the user can adjust the number of cubic lattices according to the actual use scene to control the precision, so the method is easy to popularize and can be widely applied to object measurement in the express delivery industry and similar fields.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention.
Detailed Description
The present invention will be described in detail with reference to examples.
The present embodiment provides a volume measurement method and system based on a 3D depth camera, as shown in fig. 1, the method includes the following steps:
(1) acquiring depth images of the top and the periphery of the object to be measured; that is, the 3D depth camera is used to continuously capture depth images of the top and periphery of the object, the object being kept fixed during shooting, so that the combined depth images cover the full view of the object's surface (the bottom may be excluded); a color image may also be captured at the same time to facilitate later display and observation;
(2) determining a reconstruction space region, and dividing the reconstruction space in the world coordinate system into n³ cubic lattices; the reconstruction space region is set manually according to the area occupied by the object, its size can be obtained from the depth values of the depth map, and the region is then divided. For example, from the depth values a 1 m³ region in which the object is located can be taken as the reconstruction space and divided into 1024³ cubic lattices, each with length, width, and height of 1/1024 m; the user can adjust the number of cubic lattices according to the actual use scenario to control the precision.
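The lattice division above can be sketched as follows. This is a minimal illustration only: the 1 m reconstruction space, the placement of its minimum corner at the world origin, and the helper names are assumptions made for the sketch, not details fixed by the patent.

```python
import numpy as np

# Assumed parameters: a 1 m^3 reconstruction space divided into n^3 cubic lattices.
n = 1024                        # lattices per axis; user-adjustable to control precision
space_size = 1.0                # edge length of the reconstruction space, in metres
lattice_size = space_size / n   # edge length of one cubic lattice (1/1024 m here)
lattice_volume = lattice_size ** 3

def lattice_center(i, j, k):
    """World coordinates of the centre of lattice (i, j, k), assuming the
    reconstruction space's minimum corner sits at the world origin."""
    return (np.array([i, j, k], dtype=float) + 0.5) * lattice_size
```

Raising n refines the grid (and the measured volume) at cubic cost in memory and time, which is the precision/speed trade-off the text describes.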
(3) calculating the camera displacement and rotation parameters between the shooting poses of the depth images captured in step (1), namely the camera displacement matrix T and rotation matrix R, by using an ICP (Iterative Closest Point) registration method;
the method specifically comprises the following steps: according to the sequence of the images shot in the step (1), starting from the frame 2, each frame is compared with the previous frame, and the camera displacement parameter and the rotation parameter between the shooting poses of the two frames are calculated:
(3.1) setting the current frame as a target depth map and the previous frame as an original depth map;
(3.2) calculating the three-dimensional coordinates of all points in the original depth map and the target depth map (for registration and fusion of the point clouds), generating an original point set p1 and a target point set p2;
(3.3) for each point in the original point set, searching out the corresponding point closest to the point in the target point set, and forming a new point set Q by using the corresponding points in the target point set;
(3.4) calculating a camera displacement parameter T and a rotation parameter R between the original point set p1 and the new point set Q by adopting a minimum root mean square method;
(3.5) carrying out coordinate transformation on the original point set p1 using the camera displacement parameter T and the rotation parameter R to obtain the transformation point set U, namely U = R·p1 + T, where U is the transformed point set, p1 is the original point set, T is the camera displacement parameter, and R is the rotation parameter; in this embodiment T is a 3×1 translation vector and R is a 3×3 rotation matrix.
(3.6) calculating the root mean square error between the transformation point set U and the new point set Q; when the root mean square error is smaller than a preset threshold (the preset limit value e), outputting the camera displacement and rotation parameters of step (3.4) as the result; otherwise, taking the transformation point set as the original point set (i.e., replacing p1 with U) and returning to step (3.3) to continue the iteration.
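Steps (3.1) to (3.6) can be sketched as a standard ICP loop. This is an illustrative sketch under stated assumptions: the brute-force nearest-neighbour search and the SVD-based (Kabsch) least-RMS solver are common choices assumed here, not details given in the patent.

```python
import numpy as np

def icp(p1, p2, e=1e-6, max_iter=50):
    """Estimate the rotation R and displacement T that align point set p1 (N,3)
    to point set p2 (M,3), following steps (3.3)-(3.6)."""
    src = p1.astype(float).copy()
    R_total, T_total = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        # (3.3) for each source point, find the closest point in the target set
        d2 = np.sum((src[:, None, :] - p2[None, :, :]) ** 2, axis=2)
        Q = p2[np.argmin(d2, axis=1)]
        # (3.4) least-RMS rigid transform between src and Q (Kabsch / SVD)
        mu_s, mu_q = src.mean(axis=0), Q.mean(axis=0)
        H = (src - mu_s).T @ (Q - mu_q)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        T = mu_q - R @ mu_s
        # (3.5) transform the source set: U = R * p1 + T
        src = src @ R.T + T
        R_total, T_total = R @ R_total, R @ T_total + T
        # (3.6) stop once the RMS error falls below the preset limit e
        if np.sqrt(np.mean(np.sum((src - Q) ** 2, axis=1))) < e:
            break
    return R_total, T_total
```

On exit, `p1 @ R_total.T + T_total` approximates `p2` provided the two frames overlap sufficiently and the initial pose difference is small, as is the case for consecutive frames.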
(4) reconstructing the object surface using the TSDF according to the camera displacement and rotation parameters, and judging whether each cubic lattice is outside, inside, or on the surface of the object. The TSDF (Truncated Signed Distance Function) is a surface reconstruction algorithm that uses structured point cloud data and expresses the surface with parameters: the point cloud data are mapped into a predefined three-dimensional space, the truncated signed distance function is used to represent the region near the surface of the real scene, and a surface model is built.
The method specifically comprises the following steps:
(4.1) traversing all lattices in each frame of the image (in this embodiment, scanning front to back by distance from the camera); for the lattice g at each (x, y) coordinate, acquiring its coordinates Vg(x, y, z) in the world coordinate system (in this embodiment the center point of each lattice is taken as the lattice coordinate; alternatively, one vertex of each lattice may be used uniformly), and converting them into the coordinates V(x, y, z) in the camera coordinate system according to the camera displacement and rotation parameters obtained in step (3);
(4.2) converting the coordinates in the camera coordinate system into two-dimensional coordinates p(u, v) in the image coordinate system according to the camera intrinsic parameters;
(4.3) when the depth value at the lattice's coordinate point in the image coordinate system is 0, skipping the TSDF calculation for that lattice (a depth value of 0 means the corresponding pixel is invalid);
when the depth value at the lattice's coordinate point in the image coordinate system is not 0, iteratively correcting the existing TSDF value of the lattice during the traversal, and judging whether the lattice is outside, inside, or on the surface of the object to be measured according to the TSDF value after the traversal ends, as follows:
the initial TSDF value is set to the difference between the distance from the lattice's center-point coordinates in the world coordinate system to the camera origin ti of the frame and the depth value D(p) at the current lattice's coordinate point in the image coordinate system;
a weight value for the current calculation of the lattice is set; specifically, a weight wi is selected for the TSDF value calculated for the lattice this time, and the lattice's weight is updated to wi + 1, this value being kept for the next calculation;
the new TSDF value of the lattice is computed as a weighted average:
TSDF_new = (TSDF + TSDF_old * wi_old) / (wi_old + 1);
where TSDF_old is the TSDF value last calculated for the lattice and wi_old is the last weight value of the lattice;
the TSDF value of each lattice is updated iteratively during the traversal; at the end of the traversal, if the TSDF value is 0, the lattice is judged to be on the surface of the object to be measured; if the TSDF value is positive, the lattice is judged to lie behind the measured surface, i.e., inside the object; in this case the TSDF value lies in [0, 1], and the closer the lattice is to the observed surface, the closer the value is to 0;
if the TSDF value is negative, the lattice is judged to lie in front of the measured surface, i.e., outside the object; in this case the TSDF value lies in [-1, 0], and the closer the lattice is to the observed surface, the closer the value is to 0.
(5) calculating the volume of the object from the total number of cubic lattices on the surface of and inside the object. The calculation is: volume = n × (volume of each cubic lattice), where n is the total number of cubic lattices on the surface of and inside the object, and the volume of each cubic lattice is determined when the reconstruction space is divided in step (2).
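Under the sign convention above, step (5) reduces to counting the lattices whose final TSDF value is non-negative (surface or interior) and multiplying by the per-lattice volume. A sketch, assuming the fused TSDF values are held in a NumPy array and that unobserved lattices carry a negative sentinel (both assumptions of this sketch):

```python
import numpy as np

def measure_volume(tsdf_grid, lattice_volume):
    """Volume = n * (volume of one cubic lattice), where n counts the lattices
    with TSDF >= 0, i.e. those on the surface of or inside the object."""
    n = int(np.count_nonzero(tsdf_grid >= 0))
    return n * lattice_volume
```

With a 1024³ grid over a 1 m³ space, each counted lattice contributes (1/1024)³ m³, so the result's resolution is set directly by the grid division chosen in step (2).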
A 3D-depth-camera-based volume measurement system built on the above method comprises the following units and modules, whose internal operation follows the volume measurement method described above:
the 3D depth camera is used for continuously shooting depth images of the top and the periphery of the object to be detected;
a reconstruction space dividing unit for determining a reconstruction space region and dividing the reconstruction space into n³ cubic lattices;
the ICP registration unit is used for calculating camera displacement parameters and rotation parameters between shooting poses of all the depth images;
the surface reconstruction unit is used for reconstructing the surface of the object by using TSDF according to the camera displacement parameter and the rotation parameter and judging whether each cubic lattice is positioned outside, inside or on the surface of the object;
and the volume calculation unit is used for calculating the volume of the object according to the total number of the cubic lattices on the surface and in the interior of the object.
The ICP registration unit comprises a target setting module, a point set generation module, a new point set generation module, a parameter calculation module, a transformation point set module and a root-mean-square error module:
the target setting module is used for setting the current frame as a target depth map and the previous frame as an original depth map;
the point set generating module is used for calculating three-dimensional coordinates of all points in the original depth map and the target depth map to generate an original point set and a target point set;
the new point set generating module is used for searching the corresponding points closest to each point in the original point set from the target point set and forming a new point set by the corresponding points in the target point set;
the parameter calculation module is used for calculating camera displacement parameters and rotation parameters between the original point set and the new point set;
the transformation point set module is used for carrying out coordinate transformation on the original point set by using the camera displacement parameter and the rotation parameter to obtain a transformation point set;
the root mean square error module is used for calculating the root mean square error between the transformation point set and the new point set, taking the camera displacement parameter and the rotation parameter as the result when the root mean square error is smaller than a preset threshold value; otherwise, taking the transformation point set as the original point set and returning to the new point set generating module.
The surface reconstruction unit comprises a coordinate conversion module and a lattice judgment module:
the coordinate conversion module is used for acquiring the coordinates of each lattice in the world coordinate system and converting them into coordinates in the camera coordinate system according to the camera displacement and rotation parameters, and then converting the camera-coordinate-system coordinates into two-dimensional coordinates in the image coordinate system according to the camera intrinsic parameters;
the lattice judgment module is used for traversing all lattices, skipping the TSDF calculation for a lattice when the depth value at its coordinate point in the image coordinate system is 0; and, when the depth value is not 0, iteratively correcting the TSDF value of the lattice during the traversal and judging whether the lattice is outside, inside, or on the surface of the object to be measured according to the TSDF value after the traversal ends.
The above is only a preferred embodiment of the present invention. It will be apparent to those skilled in the art that several modifications and improvements can be made without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A volume measurement method based on a 3D depth camera is characterized by comprising the following steps:
(1) acquiring depth images of the top and the periphery of an object to be detected;
(2) determining a reconstruction space region, and dividing the reconstruction space into n³ cubic lattices;
(3) calculating camera displacement and rotation parameters between the shooting poses of the depth images captured in step (1) by using an ICP (Iterative Closest Point) registration method;
(4) reconstructing the surface of the object by using TSDF according to the displacement parameter and the rotation parameter of the camera, and judging whether each cubic lattice is outside, inside or on the surface of the object;
(5) the volume of the object is calculated from the total number of cubic lattices on the surface of the object and inside the object.
2. The method according to claim 1, characterized in that step (3) is specifically: following the order in which the images were captured in step (1), starting from the second frame, each frame is compared with the previous frame, and the camera displacement and rotation parameters between the two shooting poses are calculated:
(3.1) setting the current frame as a target depth map and the previous frame as an original depth map;
(3.2) calculating three-dimensional coordinates of all points in the original depth map and the target depth map to generate an original point set and a target point set;
(3.3) for each point in the original point set, searching out the corresponding point closest to the point in the target point set, and combining the corresponding points in the target point set into a new point set;
(3.4) calculating a camera displacement parameter and a rotation parameter between the original point set and the new point set;
(3.5) for the original point set, carrying out coordinate transformation by using the camera displacement parameter and the rotation parameter to obtain a transformation point set;
(3.6) calculating the root mean square error between the transformation point set and the new point set, and taking the camera displacement parameter and the rotation parameter in the step (3.4) as results when the root mean square error is smaller than a preset threshold value; otherwise, the transformed point set is used as the original point set, and the step (3.3) is returned.
3. The method according to claim 2, characterized in that in step (3.4) the camera displacement and rotation parameters between the original point set and the new point set are calculated by a least root-mean-square method.
4. The method of claim 2, wherein the transformation point set in step (3.5) is obtained as U = R·p1 + T, where U is the transformed point set, p1 is the original point set, T is the camera displacement parameter, and R is the rotation parameter.
5. The method according to claim 1, wherein the step (4) adopts the following method:
(4.1) traversing all lattices in each frame of image, acquiring coordinates of each lattice in a world coordinate system, and converting the coordinates into coordinates in a camera coordinate system according to the camera displacement parameters and the rotation parameters obtained in the step (3);
(4.2) converting the coordinates under the camera coordinate system into two-dimensional coordinates under the image coordinate system according to the camera internal parameters;
(4.3) skipping the TSDF calculation for a lattice when the depth value at the lattice's coordinate point in the image coordinate system is 0;
and when the depth value at the lattice's coordinate point in the image coordinate system is not 0, iteratively correcting the TSDF value of the lattice during the traversal, and judging whether the lattice is outside, inside, or on the surface of the object to be measured according to the TSDF value after the traversal ends.
6. The method of claim 5, wherein in step (4.3) the method for iteratively correcting the TSDF value of the lattice during the traversal and judging, from the TSDF value at the end of the traversal, whether the lattice is outside, inside, or on the surface of the object to be measured comprises:
setting the initial TSDF value to the difference between the distance from the lattice's center-point coordinates in the world coordinate system to the camera origin of the frame and the depth value at the current lattice's coordinate point in the image coordinate system;
setting a weight value for the current calculation of the lattice;
computing the new TSDF value of the lattice as a weighted average:
TSDF_new = (TSDF + TSDF_old * wi_old) / (wi_old + 1);
where TSDF_old is the TSDF value last calculated for the lattice and wi_old is the last weight value of the lattice;
updating the TSDF value of each lattice iteratively during the traversal; at the end of the traversal, if the TSDF value is 0, the lattice is judged to be on the surface of the object to be measured; if the TSDF value is positive, the lattice is judged to be inside the object; and if the TSDF value is negative, the lattice is judged to be outside the object.
7. The method according to claim 1, wherein the calculation in step (5) is: volume = n × (volume of each cubic lattice), where n is the total number of cubic lattices on the surface of and inside the object.
8. A 3D depth camera based volumetric measurement system comprising:
the 3D depth camera is used for continuously shooting depth images of the top and the periphery of the object to be detected;
a reconstruction space dividing unit for determining a reconstruction space region and dividing the reconstruction space into n³ cubic lattices;
the ICP registration unit is used for calculating camera displacement parameters and rotation parameters between shooting poses of all the depth images;
the surface reconstruction unit is used for reconstructing the surface of the object by using TSDF according to the camera displacement parameter and the rotation parameter and judging whether each cubic lattice is positioned outside, inside or on the surface of the object;
and the volume calculation unit is used for calculating the volume of the object according to the total number of the cubic lattices on the surface and in the interior of the object.
9. The system according to claim 8, wherein the ICP registration unit includes:
the target setting module is used for setting the current frame as a target depth map and the previous frame as an original depth map;
the point set generating module is used for calculating three-dimensional coordinates of all points in the original depth map and the target depth map to generate an original point set and a target point set;
the new point set generating module is used for searching the corresponding points closest to each point in the original point set from the target point set and forming a new point set by the corresponding points in the target point set;
the parameter calculation module is used for calculating camera displacement parameters and rotation parameters between the original point set and the new point set;
the transformation point set module is used for carrying out coordinate transformation on the original point set by using the camera displacement parameter and the rotation parameter to obtain a transformation point set;
the root mean square error module is used for calculating the root mean square error between the transformation point set and the new point set; when the root mean square error is smaller than a preset threshold value, the camera displacement parameter and the rotation parameter are taken as the result; otherwise, the transformation point set is taken as the original point set and processing returns to the new point set generating module.
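The module flow of claim 9 (nearest-neighbour correspondence, transform estimation, transformation of the original point set, RMSE test) is a point-to-point ICP loop. The sketch below uses NumPy, brute-force nearest-neighbour search, and the closed-form SVD (Kabsch) solution for the displacement and rotation parameters; the patent does not specify the solver, so the SVD step is an assumption and all names are illustrative:

```python
import numpy as np

def icp(src, dst, rmse_thresh=1e-6, max_iter=50):
    """Point-to-point ICP following claim 9's module flow.
    src: (N, 3) original point set; dst: (M, 3) target point set.
    Returns rotation R (3x3) and translation t (3,) mapping src toward dst."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(max_iter):
        # New point set generating module: nearest neighbour in dst per point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        new_set = dst[d2.argmin(axis=1)]
        # Parameter calculation module: closed-form rigid fit (Kabsch/SVD).
        mu_c, mu_n = cur.mean(0), new_set.mean(0)
        H = (cur - mu_c).T @ (new_set - mu_n)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T          # det correction avoids reflections
        t = mu_n - R @ mu_c
        # Transformation point set module: apply R, t to the current set.
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        # Root mean square error module: stop when below threshold.
        rmse = np.sqrt(((cur - new_set) ** 2).sum(-1).mean())
        if rmse < rmse_thresh:
            break
    return R_total, t_total
```

With well-separated points and a small inter-frame motion, the nearest-neighbour correspondences are mostly correct from the first iteration, which is why ICP is applied between consecutive depth frames rather than distant ones.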
10. The system according to claim 8, wherein the surface reconstruction unit comprises:
the coordinate conversion module is used for acquiring the coordinates of each crystal lattice in the world coordinate system and converting the coordinates into the coordinates in the camera coordinate system according to the camera displacement parameters and the rotation parameters; then converting the coordinates under the camera coordinate system into two-dimensional coordinates under the image coordinate system according to the camera internal parameters;
the lattice judgment module is used for traversing all lattices, and skipping TSDF calculation of the lattices when the depth value of a coordinate point of the lattice in the image coordinate system is 0; and when the depth value of the coordinate point of the crystal lattice in the image coordinate system is not 0, iteratively correcting the TSDF value of the crystal lattice during traversal, and judging that the crystal lattice is outside, inside or on the surface of the object to be detected according to the TSDF value after traversal is finished.
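The coordinate conversion in claim 10 is a rigid transform followed by a standard pinhole projection. A minimal sketch (the intrinsics fx, fy, cx, cy and the function name are illustrative; the patent only says "camera internal parameters"):

```python
import numpy as np

def world_to_pixel(p_world, R, t, fx, fy, cx, cy):
    """World -> camera via the displacement and rotation parameters,
    then camera -> image via pinhole intrinsics.
    Returns pixel coordinates (u, v) and the camera-space depth z."""
    p_cam = R @ p_world + t              # world -> camera coordinate system
    u = fx * p_cam[0] / p_cam[2] + cx    # camera -> image coordinate system
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v, p_cam[2]
```

The lattice judgment module would then look up the depth image at (u, v): a reading of 0 means the lattice is skipped, otherwise its TSDF value is iteratively corrected as in claim 6.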
CN202010290597.3A 2019-11-29 2020-04-14 Volume measurement method and system based on 3D depth camera Pending CN111292326A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911196241 2019-11-29
CN2019111962417 2019-11-29

Publications (1)

Publication Number Publication Date
CN111292326A (en) 2020-06-16

Family

ID=71029540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010290597.3A Pending CN111292326A (en) 2019-11-29 2020-04-14 Volume measurement method and system based on 3D depth camera

Country Status (1)

Country Link
CN (1) CN111292326A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292925A (en) * 2017-06-06 2017-10-24 哈尔滨工业大学深圳研究生院 Measuring method based on Kinect depth camera
CN107833270A (en) * 2017-09-28 2018-03-23 浙江大学 Real-time object dimensional method for reconstructing based on depth camera
CN108053476A (en) * 2017-11-22 2018-05-18 上海大学 A kind of human parameters measuring system and method rebuild based on segmented three-dimensional


Similar Documents

Publication Publication Date Title
CN110363858B (en) Three-dimensional face reconstruction method and system
Alidoost et al. Comparison of UAS-based photogrammetry software for 3D point cloud generation: a survey over a historical site
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
CN109242862B (en) Real-time digital surface model generation method
CN110866531A (en) Building feature extraction method and system based on three-dimensional modeling and storage medium
CN111899328B (en) Point cloud three-dimensional reconstruction method based on RGB data and generation countermeasure network
CN106709947A (en) RGBD camera-based three-dimensional human body rapid modeling system
KR102415768B1 (en) A color image generating device by feature ground height and a color image generating program by feature height
CN103456038A (en) Method for rebuilding three-dimensional scene of downhole environment
JP6534296B2 (en) Three-dimensional model generation device, three-dimensional model generation method, and program
CN112307553B (en) Method for extracting and simplifying three-dimensional road model
CN109147025B (en) RGBD three-dimensional reconstruction-oriented texture generation method
CN110738618A (en) irregular windrow volume measurement method based on binocular camera
CN105631939B (en) A kind of three-dimensional point cloud distortion correction method and its system based on curvature filtering
CN114419130A (en) Bulk cargo volume measurement method based on image characteristics and three-dimensional point cloud technology
CN107796370B (en) Method and device for acquiring conversion parameters and mobile mapping system
JP2016217941A (en) Three-dimensional evaluation device, three-dimensional data measurement system and three-dimensional measurement method
CN106097433A (en) Object industry and the stacking method of Image model and system
CN108182722A (en) A kind of true orthophoto generation method of three-dimension object edge optimization
JP2003323640A (en) Method, system and program for preparing highly precise city model using laser scanner data and aerial photographic image
CN109461197B (en) Cloud real-time drawing optimization method based on spherical UV and re-projection
CN113034571A (en) Object three-dimensional size measuring method based on vision-inertia
CN114998545A (en) Three-dimensional modeling shadow recognition system based on deep learning
KR101021013B1 (en) A system for generating 3-dimensional geographical information using intensive filtering an edge of building object and digital elevation value
CN113670316A (en) Path planning method and system based on double radars, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200616
