CN113781531A - Moving object point cloud model registration method - Google Patents

Moving object point cloud model registration method

Info

Publication number
CN113781531A
Authority
CN
China
Prior art keywords
point cloud
registration
same
cloud model
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110848833.3A
Other languages
Chinese (zh)
Inventor
邱鹏
李赛红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Yifangti Technology Co ltd
Original Assignee
Wuhan Yifangti Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Yifangti Technology Co ltd filed Critical Wuhan Yifangti Technology Co ltd
Priority to CN202110848833.3A priority Critical patent/CN113781531A/en
Publication of CN113781531A publication Critical patent/CN113781531A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of machine vision, and in particular to a moving object point cloud model registration method comprising the following steps: inputting the two point cloud models obtained by scanning and reconstructing the same object during two separate passes of motion; extracting, from each of the two models, several distinctive local point clouds that are identical between the models, and performing a primary registration of the two object models based on the extracted identical local point clouds; adjusting the object model according to the registration result to obtain a primary matching result; and extracting the identical parts of the two object model bodies and performing the final registration based on these extracted identical parts. The method is suitable for sparse object point cloud models with low point density and for pairs of object point cloud models that differ considerably from each other, and it improves matching speed.

Description

Moving object point cloud model registration method
Technical Field
The invention relates to the technical field of machine vision, in particular to a moving object point cloud model registration method.
Background
The mainstream point cloud model registration methods currently include feature-point-based registration and ICP (Iterative Closest Point) based registration, both of which register the point cloud models globally as a whole. These mainstream methods are generally applied to scenes where the point clouds are dense, small in size, and differ little from each other.
The application scene addressed here is the registration of sparse point clouds of large, moving objects with large differing parts. Point cloud registration refers to solving for the translation and rotation parameters that bring two similar point clouds into coincidence. In 3D computer vision, point cloud models acquired and processed by different sensors are registered and fused to perform three-dimensional reconstruction.
The prior art model registration method has the following disadvantages:
1. Low applicability to the point cloud models of large objects, making it difficult to meet the accuracy requirement. Taking a large truck as an example, the extent of the truck point cloud can reach 2.5 m x 3 m x 20 m, and general point cloud registration algorithms are difficult to apply.
2. Low applicability to low-density, sparse object point cloud models, making it difficult to meet the accuracy requirement. Because of their large size, objects scanned and reconstructed by the sensor often yield sparse point clouds; scanning a common large truck gives a resolution with a point spacing of about 5 cm.
3. Low applicability to pairs of object point cloud models with large differing parts, making it difficult to meet the accuracy requirement. The object may carry extra items during the second scan, so the two scan results differ considerably, and general point cloud registration algorithms are difficult to apply.
4. Low efficiency and long registration time. Using the point cloud registration algorithms built into the PCL (Point Cloud Library), strict one-to-one ICP (Iterative Closest Point) registration of an object point cloud of about 60,000 points takes up to two minutes.
Disclosure of Invention
In order to solve the above problems, the invention provides a moving object point cloud model registration method that is suitable for sparse object point cloud models with low point density and for pairs of object point cloud models with large differing parts, and that improves matching speed.
In order to solve the problems, the invention adopts the technical scheme that:
a method for registering a point cloud model of a moving object comprises the following steps,
step 1, inputting the two point cloud models obtained by scanning and reconstructing the same object during two separate passes of motion;
step 2, extracting from each of the two models of the same object several distinctive identical local point clouds, and performing a primary registration of the two object models based on the extracted identical local point clouds;
step 3, adjusting the object model based on the registration result to obtain a primary matching result;
and step 4, extracting the identical parts of the two object model bodies respectively, and performing the final registration based on the extracted identical parts.
Preferably, in step 1, the input point cloud model registration object is a large-size sparse point cloud model with a certain degree of difference.
Preferably, in step 2, for matching the input point cloud models, the significantly identical parts of the two object models serve as the input reference for the first matching, and these parts can be extracted by methods such as their proportion within the overall structure; then, with the first matching result as the reference, the main body parts whose features differ little between the objects serve as the input reference for the second matching; these main body parts can be obtained from the voxel differences after octree segmentation, and, combined with PCL conditional filtering, the identical part structures required for the second registration can be obtained accurately.
Preferably, in step 2, the position coordinates of the centroid of each significantly identical partial structure are calculated, the point cloud is adjusted with the centroid as the origin of the coordinate system, and after the centroid coordinates of the two significantly identical partial structures are made to coincide, ICP registration is performed, thereby achieving the primary registration of the two object models.
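By way of a non-limiting illustration only, the following minimal sketch shows how this centroid-coincidence step followed by ICP could be expressed with the PCL API; the function name registerSameParts and the cloud variables are hypothetical, and the sketch is not asserted to be the claimed implementation.

```cpp
// Hedged sketch: primary registration of the two significantly identical
// partial structures (e.g. the two truck heads). Assumes PCL >= 1.8.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/centroid.h>
#include <pcl/common/transforms.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

Eigen::Matrix4f registerSameParts(const Cloud::Ptr& part_a, const Cloud::Ptr& part_b)
{
    // 1. Centroid of each extracted partial structure.
    Eigen::Vector4f c_a, c_b;
    pcl::compute3DCentroid(*part_a, c_a);
    pcl::compute3DCentroid(*part_b, c_b);

    // 2. Translate part_a so that the two centroids coincide (coarse alignment).
    Eigen::Matrix4f shift = Eigen::Matrix4f::Identity();
    shift.block<3, 1>(0, 3) = (c_b - c_a).head<3>();
    Cloud::Ptr part_a_shifted(new Cloud);
    pcl::transformPointCloud(*part_a, *part_a_shifted, shift);

    // 3. Refine with ICP between the two small partial clouds.
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(part_a_shifted);
    icp.setInputTarget(part_b);
    Cloud aligned;
    icp.align(aligned);

    // Total rigid transform: ICP refinement composed with the centroid shift.
    return icp.getFinalTransformation() * shift;
}
```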
Preferably, in step 2, after the registration of the significantly identical partial structures is completed, the rigid transformation matrix between the two point clouds is obtained; the rigid transformation matrix describes the rotation and translation in three-dimensional space that moves one of the significantly identical partial structures to the position of the other, and based on this rigid transformation matrix the point cloud of one object model is rigidly transformed to the position of the other object model's point cloud.
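Continuing the sketch above (again purely as an assumed illustration), the 4x4 rigid transformation returned there can be applied to the whole model point cloud; head_unloaded, head_loaded and unloaded_cloud are hypothetical clouds filled elsewhere.

```cpp
// Uses registerSameParts() and the includes from the previous sketch.
Eigen::Matrix4f T = registerSameParts(head_unloaded, head_loaded);
pcl::PointCloud<pcl::PointXYZ>::Ptr unloaded_aligned(new pcl::PointCloud<pcl::PointXYZ>);
pcl::transformPointCloud(*unloaded_cloud, *unloaded_aligned, T);  // move the whole model to the other model's position
```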
Preferably, in step 4, the other locally identical part point clouds, apart from the significantly identical partial structures, are extracted; the point cloud normal vectors are computed, the points whose normal vectors are perpendicular to a specific plane are identified and removed, and the processed point clouds are filtered; voxel segmentation is then performed with an octree, the voxels of the two point clouds are compared, and the shared parts are retained, yielding two similar point clouds that have already undergone the primary coarse registration, differ little from each other, and roughly coincide in space; based on the ICP algorithm framework built into the PCL library, the ICP convergence conditions are adjusted dynamically according to the size of the object point cloud model and the density of the sparse point cloud.
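A minimal sketch of this step-4 preprocessing is given below, assuming PCL's normal estimation and its octree change detector stand in for the normal-vector test and the voxel comparison; the ground-normal direction (0, 0, 1), the 0.9 threshold, the neighbourhood size and the voxel size are illustrative assumptions rather than values taken from this disclosure.

```cpp
// Hedged sketch: remove points lying on horizontal surfaces (normals parallel to
// the ground normal), then keep only the points whose octree voxels are shared
// with the other model. Assumes PCL >= 1.8.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/octree/octree_pointcloud_changedetector.h>
#include <cmath>
#include <vector>

pcl::PointCloud<pcl::PointXYZ>::Ptr keepSharedBody(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& body_a,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& body_b,
    float voxel_size)                        // e.g. 0.2 m for ~5 cm point spacing
{
    // 1. Estimate normals on body_a and drop points whose normal is nearly
    //    parallel to the ground normal, i.e. points on horizontal surfaces
    //    such as the cargo-box floor.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(body_a);
    ne.setSearchMethod(pcl::search::KdTree<pcl::PointXYZ>::Ptr(
        new pcl::search::KdTree<pcl::PointXYZ>));
    ne.setKSearch(10);
    pcl::PointCloud<pcl::Normal> normals;
    ne.compute(normals);

    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
    for (std::size_t i = 0; i < body_a->size(); ++i)
        if (std::abs(normals[i].normal_z) < 0.9f)       // not a horizontal surface
            filtered->push_back((*body_a)[i]);

    // 2. Octree voxel comparison: retain the points of 'filtered' whose voxels
    //    are also occupied by body_b (the shared part of the two models).
    pcl::octree::OctreePointCloudChangeDetector<pcl::PointXYZ> octree(voxel_size);
    octree.setInputCloud(body_b);
    octree.addPointsFromInputCloud();
    octree.switchBuffers();
    octree.setInputCloud(filtered);
    octree.addPointsFromInputCloud();

    std::vector<int> only_in_a;                         // points absent from body_b's voxels
    octree.getPointIndicesFromNewVoxels(only_in_a);

    std::vector<bool> drop(filtered->size(), false);
    for (int idx : only_in_a) drop[idx] = true;
    pcl::PointCloud<pcl::PointXYZ>::Ptr shared(new pcl::PointCloud<pcl::PointXYZ>);
    for (std::size_t i = 0; i < filtered->size(); ++i)
        if (!drop[i]) shared->push_back((*filtered)[i]);
    return shared;
}
```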
Preferably, the efficiency of the solution of the objective function in the ICP algorithm flow is optimized based on a Google open-source Ceres convex optimization algorithm library.
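As a hedged illustration of what such a Ceres-based acceleration could look like, the sketch below replaces the per-iteration rigid-motion solve of a point-to-point ICP objective with a Ceres problem over an angle-axis rotation and a translation; the correspondence pairs are assumed to come from the usual nearest-neighbour search, and nothing here is asserted to be the specific optimization used by the invention.

```cpp
// Hedged sketch: one rigid-motion update of a point-to-point ICP objective
// solved with Ceres. The caller supplies matched source/target points.
#include <ceres/ceres.h>
#include <ceres/rotation.h>
#include <array>
#include <vector>

struct PointToPointResidual {
    PointToPointResidual(const std::array<double, 3>& src, const std::array<double, 3>& dst)
        : src_(src), dst_(dst) {}

    template <typename T>
    bool operator()(const T* const angle_axis, const T* const t, T* residual) const {
        T p[3] = {T(src_[0]), T(src_[1]), T(src_[2])};
        T q[3];
        ceres::AngleAxisRotatePoint(angle_axis, p, q);   // rotate the source point
        residual[0] = q[0] + t[0] - T(dst_[0]);          // translate and compare to
        residual[1] = q[1] + t[1] - T(dst_[1]);          // the matched target point
        residual[2] = q[2] + t[2] - T(dst_[2]);
        return true;
    }
    std::array<double, 3> src_, dst_;
};

void solveRigidUpdate(const std::vector<std::array<double, 3>>& src,
                      const std::vector<std::array<double, 3>>& dst,
                      double angle_axis[3], double translation[3])
{
    ceres::Problem problem;
    for (std::size_t i = 0; i < src.size(); ++i) {
        problem.AddResidualBlock(
            new ceres::AutoDiffCostFunction<PointToPointResidual, 3, 3, 3>(
                new PointToPointResidual(src[i], dst[i])),
            nullptr, angle_axis, translation);
    }
    ceres::Solver::Options options;
    options.linear_solver_type = ceres::DENSE_QR;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
}
```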
The invention has the beneficial effects that:
1. The method supports and is suitable for sparse object point cloud models with low point density. With reference to truck model registration, the invention can be applied to point cloud registration where the spacing between points in the cloud is as large as 5 cm.
2. The method supports and is suitable for pairs of object point cloud models with large differing parts. With reference to truck model registration, the invention can accommodate registration of models whose differing part, such as the cargo box of an unloaded versus a loaded truck, amounts to roughly 30%-40% of the whole.
3. The method improves registration efficiency and reduces time consumption. The invention works in a segmented manner, extracting only the identical part point clouds for registration, and additionally optimizes efficiency with Ceres. Tested on a standard truck point cloud of about 60,000 points, the registration time drops from two minutes with the PCL built-in ICP to about half a minute.
Drawings
FIG. 1 is a flowchart of a method for registering a point cloud model of a moving object according to the present invention.
FIG. 2 is a no-load truck point cloud model in the moving object point cloud model registration method of the present invention.
FIG. 3 is a loaded truck point cloud model in the moving object point cloud model registration method of the present invention.
Detailed Description
The present invention is described in detail below with reference to the attached drawings.
As shown in fig. 1, a moving object point cloud model registration method includes the following steps: step 1, inputting the two point cloud models obtained by scanning and reconstructing the same object during two separate passes of motion; step 2, extracting from each of the two models several distinctive identical local point clouds, and performing a primary registration of the two object models based on the extracted identical local point clouds; step 3, adjusting the object model based on the registration result to obtain a primary matching result; and step 4, extracting the identical parts of the two object model bodies respectively, and performing the final registration based on the extracted identical parts.
As shown in fig. 2 and fig. 3, in step 1, the registration objects of the input point cloud models are obtained by three-dimensional reconstruction after the large moving object is scanned by a laser radar during its motion.
In step 2, for matching the input point cloud models, the proportion of the significantly identical partial structures of the two object models within the overall structure serves as the primary matching reference, the differences of the distinguishing features of the significantly identical partial structures within the overall model serve as the secondary matching reference, and the significantly identical partial structures in the point cloud models are obtained accurately through PCL (Point Cloud Library) based conditional filtering.
In step 2, the position coordinates of the centroid of each significantly identical partial structure are calculated, the point cloud is adjusted with the centroid as the origin of the coordinate system, the centroid coordinates of the two significantly identical partial structures are made to coincide, and ICP registration is performed to achieve the primary registration of the two object models. After the registration of the significantly identical parts is completed, the rigid transformation matrix between the two point clouds is obtained; this matrix describes the rotation and translation in three-dimensional space that moves one of the significantly identical parts to the position of the other, and based on this rigid transformation matrix the point cloud of one object model is rigidly transformed to the position of the other object model's point cloud.
In step 4, the other locally identical part point clouds, apart from the significantly identical partial structures, are extracted; the point cloud normal vectors are computed, the points whose normal vectors are perpendicular to a specific plane are identified and removed, and the processed point clouds are filtered; voxel segmentation is performed with an octree, the voxels of the two point clouds are compared, and the shared parts are retained, yielding two similar point clouds that have already undergone the primary coarse registration, differ little from each other, and roughly coincide in space. Based on the ICP (Iterative Closest Point) algorithm framework built into the PCL library, the ICP convergence conditions are adjusted dynamically according to the size of the object point cloud model and the density of the sparse point cloud. The solution efficiency of the objective function in the ICP algorithm flow is optimized based on the Google open-source Ceres convex optimization algorithm library.
Example 1
This example details the procedure of the matching method. Taking a truck as an example, the flow of the method is as follows.
Inputting object point cloud model data: the registration objects to which the invention applies are obtained by three-dimensional reconstruction after a large moving object is scanned by a laser radar during its motion. Between the two scans the object may carry additional items, so the input data are two large, sparse point clouds whose main bodies are similar but which contain a large differing part. Taking a common large truck as an example, the scan in the loaded state additionally captures the cargo, so the point cloud models of the unloaded truck and the loaded truck contain a large differing part.
Extracting the identical small part of the point cloud main body: although the object contains a large differing part, it still has small identical parts with obvious features, and these parts can be extracted and used for the primary registration, which avoids the influence of the differing part on a direct overall registration. Still taking a large truck as an example, analysis shows that the truck-head portion is unchanged whether the truck is loaded or unloaded. Therefore the truck-head portions of the input unloaded and loaded models can each be extracted. Specifically, the position near the junction of the truck head and the cargo box can be found from the proportion of the head within the whole vehicle body, the split between head and cargo box can be located from the obvious height change at their junction, and finally the head can be extracted from the truck model by PCL conditional filtering, as sketched below.
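A hedged sketch of such a head extraction is shown below; it assumes the truck cloud is roughly axis-aligned with its length along X, that the head occupies about the first 20% of that length, and that a simple PassThrough cut stands in for the conditional filtering described above. The ratio, the axis and the function name are illustrative assumptions, not values taken from this disclosure.

```cpp
// Hedged sketch: cut the truck-head segment out of a whole-truck cloud.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/common.h>       // pcl::getMinMax3D
#include <pcl/filters/passthrough.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr extractHead(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& truck, float head_ratio = 0.2f)
{
    // Bounding box of the whole truck gives the vehicle length along X.
    pcl::PointXYZ min_pt, max_pt;
    pcl::getMinMax3D(*truck, min_pt, max_pt);

    // Cut at the assumed head / cargo-box junction: min_x + ratio * length.
    float cut_x = min_pt.x + head_ratio * (max_pt.x - min_pt.x);

    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(truck);
    pass.setFilterFieldName("x");
    pass.setFilterLimits(min_pt.x, cut_x);   // keep only the head segment
    pcl::PointCloud<pcl::PointXYZ>::Ptr head(new pcl::PointCloud<pcl::PointXYZ>);
    pass.filter(*head);
    return head;
}
```

In practice the cut position could also be refined from the obvious height change at the head/cargo-box junction mentioned above, for example by scanning the per-slice maximum height along X; that refinement is omitted from the sketch.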
Same-part registration: taking the truck head extracted in the previous step as an example, the head is a rigid part of the object whether the truck is loaded or unloaded, so the head registration result can serve as the reference for the overall registration of the object. The head registration process calculates the position coordinates of the two head centroids, adjusts the point clouds with the centroid as the origin of the coordinate system, and performs ICP registration after the two centroids coincide; because the head point clouds are small, the ICP implementation in the PCL library can be called directly. Adjusting the object model based on this small-part registration result yields the primary registration result: after the head registration is completed, the rigid transformation matrix between the two point clouds is obtained, describing how one head must be rotated and translated in three-dimensional space to reach the position of the other. Based on this matrix, the entire unloaded point cloud is rigidly transformed to the position of the loaded point cloud.
Extracting the identical parts of the object body: the object is large, and besides the relatively obvious small identical portion there are many other locally identical parts in its main body. These point cloud data are extracted to complete the final point cloud registration. Still taking the truck as an example, the cargo-box parts differ greatly between the unloaded and loaded states, and registering the whole model including the cargo box would be affected by this difference and would degrade the registration accuracy. Therefore the identical parts of the vehicle body are extracted for the final registration. Specifically, the point cloud normal vectors are computed, and the points whose normal vectors are perpendicular to the ground are removed according to this normal-vector test; this processing removes most of the cargo-box floor and the cargo. After filtering, voxel segmentation is performed with an octree, the voxels of the two point clouds are compared, and the shared parts are retained.
Final registration based on the extracted identical parts of the object: after the processing of the previous step, two similar point clouds that have already undergone the primary coarse registration are obtained; they differ little and their spatial positions roughly coincide. The final step is a fine point cloud registration based on the ICP registration algorithm framework. The registration is implemented on the ICP algorithm framework built into the PCL library, and the ICP convergence conditions are adjusted dynamically according to the point cloud density, the size of the object point cloud model and the sparseness of the point cloud. Meanwhile, to address the time-consumption problem, the solution efficiency of the objective function in the ICP algorithm flow is optimized using the Google open-source Ceres convex optimization algorithm library.
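A minimal sketch of such a density-aware ICP configuration with the PCL API is given below; the particular scaling of the convergence parameters with the estimated point spacing is an illustrative assumption, not the rule adopted by the invention, and the input clouds are the hypothetical outputs of the previous step.

```cpp
// Hedged sketch: fine registration whose convergence criteria scale with the
// point spacing of the sparse model (e.g. ~0.05 m for a large truck).
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

Eigen::Matrix4f finalRegistration(const pcl::PointCloud<pcl::PointXYZ>::Ptr& shared_unloaded,
                                  const pcl::PointCloud<pcl::PointXYZ>::Ptr& shared_loaded,
                                  double spacing)      // estimated point spacing in metres
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(shared_unloaded);
    icp.setInputTarget(shared_loaded);
    icp.setMaxCorrespondenceDistance(10.0 * spacing);   // looser matching for sparse clouds
    icp.setTransformationEpsilon(1e-6);
    icp.setEuclideanFitnessEpsilon(spacing * spacing);  // stop near the expected noise level
    icp.setMaximumIterations(50);
    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);
    return icp.getFinalTransformation();
}
```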
The method first extracts a small part with obvious features for the primary registration and then extracts the identical parts of the object body for a segmented secondary registration, whereas other methods generally register the whole object point cloud. The invention is optimized for the registration of large, sparse point clouds, whereas other point cloud registration methods are generally only suitable for small, dense point clouds within a limited range. Because the final registration uses only the point clouds of the identical parts of the object body, it avoids the point clouds containing large differing parts, which is an advantage over other registration methods that are generally only suitable for scenes with high point cloud similarity.
The foregoing is only a preferred embodiment of the present invention; many variations in the specific implementation and the application range can be made by those skilled in the art without departing from the spirit of the present invention, and all such changes fall within the protective scope of the invention.

Claims (7)

1. A moving object point cloud model registration method, characterized by comprising the following steps:
step 1, inputting the two point cloud models obtained by scanning and reconstructing the same object during two separate passes of motion;
step 2, extracting from each of the two models of the same object several distinctive identical local point clouds, and performing a primary registration of the two object models based on the extracted identical local point clouds;
step 3, adjusting the object model based on the registration result to obtain a primary matching result;
and step 4, extracting the identical parts of the two object model bodies respectively, and performing the final registration based on the extracted identical parts.
2. The method of registering a point cloud model of a moving object according to claim 1, characterized in that: in step 1, the input point cloud model registration object is a large-size sparse point cloud model with a certain difference.
3. The method of registering a point cloud model of a moving object according to claim 2, characterized in that: in step 2, for matching the input point cloud models, the significantly identical parts of the two object models serve as the input reference for the first matching, and these parts can be extracted by methods such as their proportion within the overall structure; then, with the first matching result as the reference, the main body parts whose features differ little between the objects serve as the input reference for the second matching, these main body parts can be obtained from the voxel differences after octree segmentation, and, combined with PCL conditional filtering, the identical part structures required for the second registration can be obtained accurately.
4. The method of registering a point cloud model of a moving object according to claim 3, characterized in that: in step 2, the position coordinates of the centroid of each significantly identical partial structure are calculated, the point cloud is adjusted with the centroid as the origin of the coordinate system, the centroid coordinates of the two significantly identical partial structures are made to coincide, and ICP registration is performed, thereby achieving the primary registration of the two object models.
5. The method of registering a point cloud model of a moving object according to claim 4, characterized in that: in step 2, after the registration of the significantly identical partial structures is completed, the rigid transformation matrix between the two point clouds is obtained; the rigid transformation matrix describes the rotation and translation in three-dimensional space that moves one of the significantly identical partial structures to the position of the other, and based on this rigid transformation matrix the point cloud of one object model is rigidly transformed to the position of the other object model's point cloud.
6. The method of registering a point cloud model of a moving object according to claim 5, characterized in that: in step 4, the other locally identical part point clouds, apart from the significantly identical partial structures, are extracted; the point cloud normal vectors are computed, the points whose normal vectors are perpendicular to a specific plane are identified and removed, and the processed point clouds are filtered; voxel segmentation is performed with an octree, the voxels of the two point clouds are compared, and the shared parts are retained, yielding two similar point clouds that have already undergone the primary coarse registration, differ little from each other, and roughly coincide in space; based on the ICP algorithm framework built into the PCL library, the ICP convergence conditions are adjusted dynamically according to the size of the object point cloud model and the density of the sparse point cloud.
7. The method of registering a point cloud model of a moving object according to claim 6, characterized in that: the solution efficiency of the objective function in the ICP algorithm flow is optimized based on the Google open-source Ceres convex optimization algorithm library.
CN202110848833.3A 2021-07-27 2021-07-27 Moving object point cloud model registration method Pending CN113781531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110848833.3A CN113781531A (en) 2021-07-27 2021-07-27 Moving object point cloud model registration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110848833.3A CN113781531A (en) 2021-07-27 2021-07-27 Moving object point cloud model registration method

Publications (1)

Publication Number Publication Date
CN113781531A true CN113781531A (en) 2021-12-10

Family

ID=78836083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110848833.3A Pending CN113781531A (en) 2021-07-27 2021-07-27 Moving object point cloud model registration method

Country Status (1)

Country Link
CN (1) CN113781531A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060034545A1 (en) * 2001-03-08 2006-02-16 Universite Joseph Fourier Quantitative analysis, visualization and movement correction in dynamic processes
CN101645170A (en) * 2009-09-03 2010-02-10 北京信息科技大学 Precise registration method of multilook point cloud
CN104952107A (en) * 2015-05-18 2015-09-30 湖南桥康智能科技有限公司 Three-dimensional bridge reconstruction method based on vehicle-mounted LiDAR point cloud data
CN109389635A (en) * 2018-09-11 2019-02-26 常州大学 A kind of coal yard excavation amount calculation method based on unmanned plane image sequence
CN110097582A (en) * 2019-05-16 2019-08-06 广西师范大学 A kind of spots cloud optimization registration and real-time display system and working method
CN110288640A (en) * 2019-06-28 2019-09-27 电子科技大学 Point cloud registration method based on convex density maximum
CN110874849A (en) * 2019-11-08 2020-03-10 安徽大学 Non-rigid point set registration method based on local transformation consistency
CN112802194A (en) * 2021-03-31 2021-05-14 电子科技大学 Nuclear facility high-precision reconstruction method based on point cloud data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU BIN et al.: "Point cloud registration combining octree and iterative closest point algorithm", Science of Surveying and Mapping, vol. 41, no. 2, 28 February 2016 (2016-02-28), pages 130 - 132 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677322A (en) * 2021-12-30 2022-06-28 东北农业大学 Milk cow body condition automatic scoring method based on attention-guided point cloud feature learning

Similar Documents

Publication Publication Date Title
CN109544456B (en) Panoramic environment sensing method based on two-dimensional image and three-dimensional point cloud data fusion
CN109903327B (en) Target size measurement method of sparse point cloud
CN107063228B (en) Target attitude calculation method based on binocular vision
CN104299260B (en) Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration
CN110866969B (en) Engine blade reconstruction method based on neural network and point cloud registration
CN111696210A (en) Point cloud reconstruction method and system based on three-dimensional point cloud data characteristic lightweight
CN111598946B (en) Object pose measuring method and device and storage medium
CN111553858B (en) Image restoration method and system based on generation countermeasure network and application thereof
CN111640158B (en) End-to-end camera and laser radar external parameter calibration method based on corresponding mask
CN111612728B (en) 3D point cloud densification method and device based on binocular RGB image
CN113011388B (en) Vehicle outer contour size detection method based on license plate and lane line
CN116402866A (en) Point cloud-based part digital twin geometric modeling and error assessment method and system
CN115330958A (en) Real-time three-dimensional reconstruction method and device based on laser radar
CN112651944A (en) 3C component high-precision six-dimensional pose estimation method and system based on CAD model
CN113781531A (en) Moving object point cloud model registration method
CN116091727A Complex curved-surface point cloud registration method based on multi-scale feature description, electronic equipment and storage medium
CN116465327A (en) Bridge line shape measurement method based on vehicle-mounted three-dimensional laser scanning
Yuan et al. 3D point cloud recognition of substation equipment based on plane detection
CN110864671A (en) Robot repeated positioning precision measuring method based on line structured light fitting plane
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
CN116878524A (en) Dynamic SLAM dense map construction method based on pyramid L-K optical flow and multi-view geometric constraint
CN110969650A (en) Intensity image and texture sequence registration method based on central projection
CN115880371A (en) Method for positioning center of reflective target under infrared visual angle
CN111932635B (en) Image calibration method adopting combination of two-dimensional and three-dimensional vision processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination