CN112070883A - Three-dimensional reconstruction method for 3D printing process based on machine vision - Google Patents

Three-dimensional reconstruction method for 3D printing process based on machine vision

Info

Publication number
CN112070883A
Authority
CN
China
Prior art keywords
dimensional
module
reconstruction
image
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010885779.5A
Other languages
Chinese (zh)
Inventor
王成玉 (Wang Chengyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN202010885779.5A
Publication of CN112070883A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/30 Auxiliary operations or equipment
    • B29C64/386 Data acquisition or data processing for additive manufacturing
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B33 ADDITIVE MANUFACTURING TECHNOLOGY
    • B33Y ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y50/00 Data acquisition or data processing for additive manufacturing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Materials Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Manufacturing & Machinery (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a three-dimensional reconstruction method for a 3D printing process based on machine vision, in the technical field of 3D printing. An image acquisition module acquires images through a camera; a camera calibration module performs multi-point calibration and is connected with a feature extraction module; the feature extraction module extracts image features and is connected with a point cloud reconstruction module; and the point cloud reconstruction module is connected with a surface reconstruction module. The invention enables rapid acquisition and extraction of images, supports point cloud and surface reconstruction, and is convenient to use; it improves accuracy, operates quickly and stably, and saves time during operation.

Description

Three-dimensional reconstruction method for 3D printing process based on machine vision
Technical Field
The invention relates to the technical field of 3D printing, in particular to a three-dimensional reconstruction method for a 3D printing process based on machine vision.
Background
3D printing (3DP), one of the rapid prototyping technologies, constructs an object layer by layer from bondable materials such as powdered metal or plastic on the basis of a digital model file. 3D printing is typically carried out with digital material printers. It is often used to manufacture models in fields such as mold making and industrial design, is increasingly used to manufacture some products directly, and parts printed with this technology are already in use. The technology has applications in jewelry, footwear, industrial design, architecture, engineering and construction (AEC), automotive, aerospace, dental and medical industries, education, geographic information systems, civil engineering, firearms, and other fields.
Existing three-dimensional reconstruction systems for the 3D printing process suffer from limited reconstruction accuracy, which causes errors in the printed articles, as well as from complex operation and long reconstruction times.
Disclosure of Invention
The invention provides a three-dimensional reconstruction method for a 3D printing process based on machine vision, which aims to solve the problems of existing three-dimensional reconstruction systems for the 3D printing process, namely errors in the printed articles caused by insufficient reconstruction accuracy, complex operation and long reconstruction times. The technical scheme is as follows:
a three-dimensional reconstruction method of a 3D printing process based on machine vision is based on a three-dimensional reconstruction system of the 3D printing process based on machine vision, and the system comprises an image acquisition module, a camera calibration module, a feature extraction module, a point cloud reconstruction module and a surface reconstruction module;
the image acquisition module is connected with the camera calibration module, the camera calibration module is connected with the feature extraction module, the feature extraction module is connected with the point cloud reconstruction module, and the point cloud reconstruction module is connected with the surface reconstruction module, and the method comprises the following steps:
step 1: acquiring a two-dimensional image of a three-dimensional object through an image acquisition module;
step 2: Establishing an effective imaging model through the camera calibration module according to the obtained two-dimensional image, determining the intrinsic and extrinsic parameters of the camera, and obtaining three-dimensional point coordinates in space;
step 3: Extracting feature points through the feature extraction module according to the obtained three-dimensional point coordinates, and taking the feature points as matching elements;
step 4: Extracting the basic geometric information for three-dimensional reconstruction by using the structure-from-motion (SfM) algorithm, namely recovering the three-dimensional positions of the observed object and the camera poses from a two-dimensional image sequence, and performing dense reconstruction;
step 5: Performing surface reconstruction on the three-dimensional dense point cloud by adopting Poisson surface reconstruction, thereby generating the model.
Preferably, step 3 specifically comprises:
step 3.1: Establishing a scale space: Gaussian blur is applied to suppress noise in the image and to emphasize its most salient features; the scale space is built from different scale coefficients, image positions are searched over all scales, and potential interest points that are invariant to scale and rotation are identified with a difference-of-Gaussian function;
step 3.2: Locating key points: features are enhanced using the difference of Gaussians, which is obtained by subtracting one blurred version of the original image from another; a difference-of-Gaussian pyramid is constructed and its extreme points are found, thereby enhancing the image features;
step 3.3: Determining key point orientations: the gradient directions in the neighborhood of each feature point are accumulated to obtain the orientation of that feature point;
step 3.4: Describing the key points: after steps 3.1-3.3 each point carries position, scale and orientation information, and a descriptor is established for each point so that it remains invariant.
Preferably, step 4 specifically comprises the following steps (an illustrative matching sketch follows step 4.4):
step 4.1: Calibrating the camera and solving for the intrinsic matrix, then extracting feature points and performing feature matching between every pair of pictures;
step 4.2: Selecting the pair with the most matched feature points as the initial pictures, and solving for the essential matrix or fundamental matrix from the epipolar geometry, wherein the essential matrix describes the relationship between the cameras in the camera coordinate system and the fundamental matrix describes it in the image coordinate system;
step 4.3: Triangulating the feature points matched between the two pictures, then continually adding new pictures and performing 3D-2D matching through the feature points matched in each new picture;
step 4.4: Applying bundle adjustment to nonlinearly optimize all camera poses and the three-dimensional coordinates of the objects in space, so that the error of the solved information is minimized.
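As one possible illustration of the pairwise feature matching in step 4.1, the following Python sketch (illustrative only, not the claimed implementation; it assumes SIFT descriptor arrays des1 and des2 already computed from two pictures with OpenCV) applies a brute-force matcher with Lowe's ratio test:

```python
# Illustrative sketch of the pairwise matching in step 4.1 (assumed inputs:
# SIFT descriptor arrays des1, des2 from two pictures).
import cv2

def match_pair(des1, des2, ratio=0.75):
    """Return the good matches between two descriptor sets using Lowe's ratio test."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)      # L2 norm suits SIFT descriptors
    knn = matcher.knnMatch(des1, des2, k=2)   # two nearest neighbours per query
    return [m for m, n in knn if m.distance < ratio * n.distance]
```

The surviving matches would then feed the essential-matrix estimation of step 4.2.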
Preferably, the image acquisition module comprises a binocular camera, a fill light, a support column and a rotating motor; the binocular camera is mounted on the upper end of the support column, the fill light is mounted on the outer sides of the two cameras of the binocular camera, and the bottom of the support column is connected to the shaft of the rotating motor.
Preferably, the system further comprises a reinforcing strip arranged on the outer side wall of the support column.
The invention has the following beneficial effects:
the invention is convenient for realizing the rapid acquisition and extraction of images, can realize the point cloud and surface reconstruction and is convenient to use; the accuracy can be improved, the operation is rapid and stable, and the time is saved during the operation.
Drawings
FIG. 1 is a three-dimensional reconstruction structure diagram of a 3D printing process based on machine vision;
FIG. 2 is a schematic structural diagram of an image acquisition module;
FIG. 3 is a schematic structural diagram of the support column.
Detailed Description
The present invention will be described in detail with reference to specific examples.
The first embodiment is as follows:
the invention provides a three-dimensional reconstruction method for a 3D printing process based on machine vision, which comprises the following steps:
step 1: acquiring a two-dimensional image of a three-dimensional object through an image acquisition module;
step 2: Establishing an effective imaging model through the camera calibration module according to the obtained two-dimensional image, determining the intrinsic and extrinsic parameters of the camera, and obtaining three-dimensional point coordinates in space;
step 3: Extracting feature points through the feature extraction module according to the obtained three-dimensional point coordinates, and taking the feature points as matching elements;
the step 3 specifically comprises the following steps:
step 3.1: the method comprises the steps of establishing a scale space, namely reducing noise points in an image by adopting a Gaussian blur method, identifying the most vivid characteristic in a given image, removing the noise points in the image by adopting the Gaussian blur method, emphasizing the important characteristic of the image, establishing the scale space according to different scale coefficients, searching image positions on all scales, and identifying potential interest points which are invariable in scale and rotation by adopting a Gaussian differential function;
step 3.2: positioning key points, and enhancing features by using Gaussian difference, wherein the Gaussian difference is obtained by subtracting one blurred version of an original image from the other blurred version of the original image to construct a Gaussian difference pyramid, and an extreme point is found, so that the image features are enhanced;
step 3.3: determining the direction of a key point, and counting the characteristic points in the gradient direction in the characteristic point field so as to obtain the direction of the characteristic points;
step 3.4: and (3) describing the characteristics of the key points, wherein each point has position scale direction information through the steps 3.1-3.3, and establishing a descriptor for each point to ensure that each point has invariance.
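Steps 3.1-3.4 correspond to the standard SIFT pipeline. A minimal Python sketch, assuming OpenCV (which performs the scale-space construction, difference-of-Gaussian extremum detection, orientation assignment and descriptor computation internally) and a placeholder image path, could look like this; it is illustrative only, not the claimed implementation:

```python
# Illustrative sketch of steps 3.1-3.4 (not the claimed implementation):
# SIFT keypoint detection and description with OpenCV.
import cv2

# Placeholder path for one frame captured by the image acquisition module.
img = cv2.imread("layer_view.png", cv2.IMREAD_GRAYSCALE)

# Step 3.1: a light Gaussian blur suppresses sensor noise before the scale
# space is built (SIFT itself constructs the Gaussian pyramid internally).
img = cv2.GaussianBlur(img, (3, 3), 0)

sift = cv2.SIFT_create()

# Steps 3.2-3.4: difference-of-Gaussian extrema are located, each key point is
# assigned a dominant gradient orientation, and a 128-dimensional descriptor is
# computed so that the feature is scale- and rotation-invariant.
keypoints, descriptors = sift.detectAndCompute(img, None)

print(f"{len(keypoints)} keypoints, descriptor array shape {descriptors.shape}")
```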
And 4, step 4: extracting basic geometric information in three-dimensional reconstruction by using a motion recovery structure sfm algorithm, namely acquiring the three-dimensional position of a seen object and the pose of a camera from a two-dimensional image sequence and performing dense reconstruction;
the step 4 specifically comprises the following steps:
step 4.1: calibrating a camera, solving an internal reference matrix, extracting feature points and performing feature matching on each 2 pictures;
step 4.2: searching a group with the most characteristic points as an initial picture, and solving an essential matrix or a basic matrix according to the antipodal geometry, wherein the essential matrix refers to the relation of different cameras in a camera coordinate system, and the basic matrix refers to a system in an image coordinate system;
step 4.3: triangularizing the feature points matched with the 2 pictures, continuously adding new pictures, and performing 3D-2D matching through the feature points matched with the new pictures;
step 4.4: and according to a light beam adjustment method, carrying out nonlinear optimization on the three-dimensional coordinates of all camera poses and objects in the space, so that the solved information error is minimum.
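A minimal two-view sketch of steps 4.2-4.3 is given below; it assumes OpenCV, an intrinsic matrix K obtained in step 4.1, and matched point arrays pts1/pts2 from the feature matching, and is illustrative only, not the claimed implementation. Incremental addition of new pictures and the bundle adjustment of step 4.4 would be layered on top of this, for example with scipy.optimize.least_squares.

```python
# Illustrative two-view sketch of steps 4.2-4.3 (not the claimed implementation).
import cv2
import numpy as np

def two_view_reconstruction(K, pts1, pts2):
    """K: 3x3 intrinsic matrix; pts1, pts2: Nx2 float arrays of matched pixels."""
    # Step 4.2: essential matrix from the epipolar constraint, with RANSAC.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # Relative rotation R and translation t of the second camera.
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Step 4.3: triangulate the inlier matches into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
    P2 = K @ np.hstack([R, t])                         # second camera pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T                   # homogeneous -> Euclidean
    return R, t, pts3d
```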
And 5: and performing surface reconstruction on the three-dimensional dense point cloud by adopting Poisson surface reconstruction, thereby generating a model.
As shown in FIG. 1, the method of the invention is based on a machine-vision three-dimensional reconstruction system for the 3D printing process; the system comprises an image acquisition module, a camera calibration module, a feature extraction module, a point cloud reconstruction module and a surface reconstruction module;
the image acquisition module is connected with the camera calibration module, the camera calibration module is connected with the feature extraction module, the feature extraction module is connected with the point cloud reconstruction module, and the point cloud reconstruction module is connected with the surface reconstruction module.
As shown in FIG. 2 and FIG. 3, the image acquisition module comprises a binocular camera 1, a fill light 2, a support column 3 and a rotating motor 4; the binocular camera 1 is mounted on the upper end of the support column 3, the fill light 2 is mounted on the outer sides of the two cameras of the binocular camera 1, and the bottom of the support column 3 is connected to the shaft of the rotating motor 4.
The system also comprises a reinforcing strip 31, which is arranged on the outer side wall of the support column 3.
The camera calibration module completes calibration of the camera by solving the projection matrix from multiple points and corrects distortion using the obtained parameters; a brief calibration sketch follows.
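A minimal sketch of such multi-point calibration, assuming a planar chessboard target and OpenCV (the board geometry, square size and file paths are placeholders, not values specified by the invention), is:

```python
# Illustrative sketch (not the claimed implementation): multi-point camera
# calibration from chessboard views, followed by distortion correction.
import glob
import cv2
import numpy as np

board = (9, 6)        # inner-corner count of the assumed chessboard target
square = 0.02         # assumed square size in metres

# 3D coordinates of the board corners in the board's own plane (Z = 0).
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):          # placeholder calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve the projection model: intrinsic matrix K, distortion coefficients,
# and the per-view extrinsic parameters (rvecs, tvecs).
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Correct the distortion of a captured frame with the obtained parameters.
frame = cv2.imread("layer_view.png")           # placeholder frame
undistorted = cv2.undistort(frame, K, dist)
```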
The point cloud reconstruction module reconstructs the point cloud using structure from motion, performs multi-view stereo dense reconstruction, and finally constructs a mesh from the point cloud using a surface reconstruction method.
The point cloud reconstruction module reconstructs the sparse point cloud through the SIFT and SfM algorithms.
Research on the camera projection model: the camera imaging model is determined by the relationship between each point in the image and the corresponding point in real space; camera calibration is completed from multiple points, the intrinsic and extrinsic parameters of the camera are determined, and the model is established (a short projection sketch follows).
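For reference, the pinhole model relates a spatial point to its image point by s * [u, v, 1]^T = K [R | t] [X, Y, Z, 1]^T. The short sketch below illustrates this relation with arbitrary example intrinsic and extrinsic parameters (assumed values, not calibration results of the invention):

```python
# Illustrative sketch of the pinhole projection model (example values only):
# s * [u, v, 1]^T = K @ [R | t] @ [X, Y, Z, 1]^T
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed intrinsic parameters
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # assumed extrinsic rotation
t = np.array([[0.0], [0.0], [0.5]])    # assumed extrinsic translation (metres)

X = np.array([[0.01], [0.02], [0.30], [1.0]])   # homogeneous 3D point

uvw = K @ np.hstack([R, t]) @ X        # project into the image plane
u, v = (uvw[:2] / uvw[2]).ravel()
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")
```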
Research on three-dimensional reconstruction: the method comprises image acquisition, camera calibration, feature extraction, point cloud reconstruction, surface reconstruction, and reconstruction of the designed three-dimensional model.
To meet the real-time requirement, an improved reconstruction algorithm is provided that increases the feature extraction speed and accelerates the point cloud reconstruction process.
The camera projection model involves solving the projection matrix from multiple points to complete the calibration of the camera and correcting distortion using the obtained parameters.
The three-dimensional reconstruction process is designed for the application characteristics: structure from motion is used to reconstruct the point cloud, multi-view stereo is used for dense reconstruction, and finally a surface reconstruction method is used to construct a mesh from the point cloud, as sketched below.
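One way the final meshing step (step 5, Poisson surface reconstruction) could be realized is with the Open3D library, as in the minimal sketch below; it is illustrative only, the file names are placeholders, and the octree depth is an assumed setting rather than a value specified by the invention:

```python
# Illustrative sketch of step 5 (not the claimed implementation):
# Poisson surface reconstruction of the dense point cloud with Open3D.
import open3d as o3d

# Placeholder file produced by the dense (multi-view stereo) reconstruction.
pcd = o3d.io.read_point_cloud("dense_points.ply")

# Poisson reconstruction requires consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# depth controls the octree resolution of the reconstructed surface (assumed value).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("printed_part_mesh.ply", mesh)
```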
The above description is only a preferred embodiment of the three-dimensional reconstruction method for a 3D printing process based on machine vision, and the protection scope of the method is not limited to the above embodiments; all technical solutions based on this idea fall within the protection scope of the invention. It should be noted that modifications and variations that do not depart from the gist of the invention, as will occur to those skilled in the art to which the invention pertains, are intended to be within the scope of the invention.

Claims (5)

1. A three-dimensional reconstruction method for a 3D printing process based on machine vision, the method being based on a machine-vision three-dimensional reconstruction system for the 3D printing process, the system comprising an image acquisition module, a camera calibration module, a feature extraction module, a point cloud reconstruction module and a surface reconstruction module, wherein the image acquisition module is connected with the camera calibration module, the camera calibration module is connected with the feature extraction module, the feature extraction module is connected with the point cloud reconstruction module, and the point cloud reconstruction module is connected with the surface reconstruction module, characterized in that the method comprises the following steps:
step 1: acquiring a two-dimensional image of a three-dimensional object through an image acquisition module;
step 2: Establishing an effective imaging model through the camera calibration module according to the obtained two-dimensional image, determining the intrinsic and extrinsic parameters of the camera, and obtaining three-dimensional point coordinates in space;
step 3: Extracting feature points through the feature extraction module according to the obtained three-dimensional point coordinates, and taking the feature points as matching elements;
step 4: Extracting the basic geometric information for three-dimensional reconstruction by using the structure-from-motion (SfM) algorithm, namely recovering the three-dimensional positions of the observed object and the camera poses from a two-dimensional image sequence, and performing dense reconstruction;
step 5: Performing surface reconstruction on the three-dimensional dense point cloud by adopting Poisson surface reconstruction, thereby generating the model.
2. The three-dimensional reconstruction method for a 3D printing process based on machine vision as claimed in claim 1, wherein step 3 specifically comprises the following steps:
step 3.1: Establishing a scale space: Gaussian blur is applied to suppress noise in the image and to emphasize its most salient features; the scale space is built from different scale coefficients, image positions are searched over all scales, and potential interest points that are invariant to scale and rotation are identified with a difference-of-Gaussian function;
step 3.2: Locating key points: features are enhanced using the difference of Gaussians, which is obtained by subtracting one blurred version of the original image from another; a difference-of-Gaussian pyramid is constructed and its extreme points are found, thereby enhancing the image features;
step 3.3: Determining key point orientations: the gradient directions in the neighborhood of each feature point are accumulated to obtain the orientation of that feature point;
step 3.4: Describing the key points: after steps 3.1-3.3 each point carries position, scale and orientation information, and a descriptor is established for each point so that it remains invariant.
3. The three-dimensional reconstruction method for a 3D printing process based on machine vision as claimed in claim 1, wherein step 4 specifically comprises the following steps:
step 4.1: Calibrating the camera and solving for the intrinsic matrix, then extracting feature points and performing feature matching between every pair of pictures;
step 4.2: Selecting the pair with the most matched feature points as the initial pictures, and solving for the essential matrix or fundamental matrix from the epipolar geometry, wherein the essential matrix describes the relationship between the cameras in the camera coordinate system and the fundamental matrix describes it in the image coordinate system;
step 4.3: Triangulating the feature points matched between the two pictures, then continually adding new pictures and performing 3D-2D matching through the feature points matched in each new picture;
step 4.4: Applying bundle adjustment to nonlinearly optimize all camera poses and the three-dimensional coordinates of the objects in space, so that the error of the solved information is minimized.
4. The three-dimensional reconstruction method for a 3D printing process based on machine vision as claimed in claim 1, wherein the image acquisition module comprises a binocular camera, a fill light, a support column and a rotating motor; the binocular camera is mounted on the upper end of the support column, the fill light is mounted on the outer sides of the two cameras of the binocular camera, and the bottom of the support column is connected to the shaft of the rotating motor.
5. The three-dimensional reconstruction method for a 3D printing process based on machine vision as claimed in claim 1, wherein the system further comprises a reinforcing strip arranged on the outer side wall of the support column.
CN202010885779.5A 2020-08-28 2020-08-28 Three-dimensional reconstruction method for 3D printing process based on machine vision Pending CN112070883A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010885779.5A CN112070883A (en) 2020-08-28 2020-08-28 Three-dimensional reconstruction method for 3D printing process based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010885779.5A CN112070883A (en) 2020-08-28 2020-08-28 Three-dimensional reconstruction method for 3D printing process based on machine vision

Publications (1)

Publication Number Publication Date
CN112070883A (en) 2020-12-11

Family

ID=73659614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010885779.5A Pending CN112070883A (en) 2020-08-28 2020-08-28 Three-dimensional reconstruction method for 3D printing process based on machine vision

Country Status (1)

Country Link
CN (1) CN112070883A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113601833A (en) * 2021-08-04 2021-11-05 温州科技职业学院 FDM three-dimensional printing control system
CN113721866A (en) * 2021-08-19 2021-11-30 东莞中国科学院云计算产业技术创新与育成中心 Data acquisition system and method applied to 3D printing
CN114161713A (en) * 2021-10-26 2022-03-11 深圳市纵维立方科技有限公司 Printing head, detection method, storage medium and three-dimensional printer
CN115139535A (en) * 2022-07-11 2022-10-04 河北工业大学 3D printer inverse feedback detection method and system based on three-dimensional reconstruction technology
CN114161713B (en) * 2021-10-26 2024-06-04 深圳市纵维立方科技有限公司 Printing head, detection method, storage medium and three-dimensional printer

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376552A (en) * 2014-09-19 2015-02-25 四川大学 Virtual-real registering algorithm of 3D model and two-dimensional image
CN105184863A (en) * 2015-07-23 2015-12-23 同济大学 Unmanned aerial vehicle aerial photography sequence image-based slope three-dimension reconstruction method
CN108537879A (en) * 2018-03-29 2018-09-14 东华智业(北京)科技发展有限公司 Reconstructing three-dimensional model system and method
CN109816724A (en) * 2018-12-04 2019-05-28 中国科学院自动化研究所 Three-dimensional feature extracting method and device based on machine vision
CN109910294A (en) * 2019-03-28 2019-06-21 哈尔滨理工大学 A kind of 3D printing formed precision detection method based on machine vision
CN110211223A (en) * 2019-05-28 2019-09-06 哈工大新材料智能装备技术研究院(招远)有限公司 A kind of increment type multiview three-dimensional method for reconstructing
CN110782521A (en) * 2019-09-06 2020-02-11 重庆东渝中能实业有限公司 Mobile terminal three-dimensional reconstruction and model restoration method and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376552A (en) * 2014-09-19 2015-02-25 四川大学 Virtual-real registering algorithm of 3D model and two-dimensional image
CN105184863A (en) * 2015-07-23 2015-12-23 同济大学 Unmanned aerial vehicle aerial photography sequence image-based slope three-dimension reconstruction method
CN108537879A (en) * 2018-03-29 2018-09-14 东华智业(北京)科技发展有限公司 Reconstructing three-dimensional model system and method
CN109816724A (en) * 2018-12-04 2019-05-28 中国科学院自动化研究所 Three-dimensional feature extracting method and device based on machine vision
CN109910294A (en) * 2019-03-28 2019-06-21 哈尔滨理工大学 A kind of 3D printing formed precision detection method based on machine vision
CN110211223A (en) * 2019-05-28 2019-09-06 哈工大新材料智能装备技术研究院(招远)有限公司 A kind of increment type multiview three-dimensional method for reconstructing
CN110782521A (en) * 2019-09-06 2020-02-11 重庆东渝中能实业有限公司 Mobile terminal three-dimensional reconstruction and model restoration method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨铎 (YANG Duo) et al.: "Research on a three-dimensional reconstruction method for impeller parts based on monocular vision", 《机电工程》 (Journal of Mechanical & Electrical Engineering), pages 74-75 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113601833A (en) * 2021-08-04 2021-11-05 温州科技职业学院 FDM three-dimensional printing control system
CN113721866A (en) * 2021-08-19 2021-11-30 东莞中国科学院云计算产业技术创新与育成中心 Data acquisition system and method applied to 3D printing
CN114161713A (en) * 2021-10-26 2022-03-11 深圳市纵维立方科技有限公司 Printing head, detection method, storage medium and three-dimensional printer
CN114161713B (en) * 2021-10-26 2024-06-04 深圳市纵维立方科技有限公司 Printing head, detection method, storage medium and three-dimensional printer
CN115139535A (en) * 2022-07-11 2022-10-04 河北工业大学 3D printer inverse feedback detection method and system based on three-dimensional reconstruction technology
CN115139535B (en) * 2022-07-11 2023-05-26 河北工业大学 Three-dimensional reconstruction technology-based 3D printer inverse feedback detection method and system

Similar Documents

Publication Publication Date Title
CN112070883A (en) Three-dimensional reconstruction method for 3D printing process based on machine vision
CN110363858B (en) Three-dimensional face reconstruction method and system
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN104240289B (en) Three-dimensional digitalization reconstruction method and system based on single camera
CN109472828B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN110176032B (en) Three-dimensional reconstruction method and device
US11290704B2 (en) Three dimensional scanning system and framework
KR20180067908A (en) Apparatus for restoring 3d-model and method for using the same
CN109373912B (en) Binocular vision-based non-contact six-degree-of-freedom displacement measurement method
CN112200203B (en) Matching method of weak correlation speckle images in oblique field of view
CN110517209B (en) Data processing method, device, system and computer readable storage medium
CN101750029B (en) Characteristic point three-dimensional reconstruction method based on trifocal tensor
CN110782521A (en) Mobile terminal three-dimensional reconstruction and model restoration method and system
CN107633533B (en) High-precision circular mark point center positioning method and device under large-distortion lens
CN107038753B (en) Stereoscopic vision three-dimensional reconstruction system and method
CN107862733B (en) Large-scale scene real-time three-dimensional reconstruction method and system based on sight updating algorithm
JP2021173740A (en) System and method for efficiently 3d re-constructing objects using telecentric line-scan cameras
CN110738731A (en) 3D reconstruction method and system for binocular vision
CN111968223A (en) Three-dimensional reconstruction system for 3D printing process based on machine vision
CN116168143A (en) Multi-view three-dimensional reconstruction method
CN117450955B (en) Three-dimensional measurement method for thin object based on space annular feature
CN107610215B (en) High-precision multi-angle oral cavity three-dimensional digital imaging model construction method
GB2569609A (en) Method and device for digital 3D reconstruction
CN109859313B (en) 3D point cloud data acquisition method and device, and 3D data generation method and system
Wu et al. A camera calibration method based on OpenCV

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination