CN107886528B - Distribution line operation scene three-dimensional reconstruction method based on point cloud - Google Patents
Distribution line operation scene three-dimensional reconstruction method based on point cloud
- Publication number
- CN107886528B CN107886528B CN201711242672.3A CN201711242672A CN107886528B CN 107886528 B CN107886528 B CN 107886528B CN 201711242672 A CN201711242672 A CN 201711242672A CN 107886528 B CN107886528 B CN 107886528B
- Authority
- CN
- China
- Prior art keywords
- point
- point cloud
- registration
- dimensional reconstruction
- key
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a point cloud-based three-dimensional reconstruction method for distribution line operation scenes. Conditional filtering, downsampling, outlier removal, and segmentation are applied to the initial point cloud to obtain part point clouds of the distribution line to be reconstructed, such as the lightning arrester and the cross arm. Key points are then extracted with the SIFT 3D algorithm and key-point description vectors are built from FPFH (Fast Point Feature Histogram) features; point cloud registration combines coarse registration with improved ICP (Iterative Closest Point) fine registration to obtain complete three-dimensional part point clouds, i.e., an offline model library. Point clouds are then collected in real time and the part point clouds are registered in turn against the library models to complete the three-dimensional reconstruction; finally, surface reconstruction with the Poisson method yields the three-dimensional model. The invention realizes semi-autonomous three-dimensional reconstruction of distribution line scenes and reduces manual intervention. For the key registration step, SIFT 3D key-point extraction and FPFH feature description vectors ensure key-point quality; weights are set on point-pair relations and wrong point pairs are eliminated, which accelerates registration and improves the efficiency of three-dimensional reconstruction.
Description
Technical Field
The invention relates to the field of live working robot environment perception, in particular to a distribution line working scene three-dimensional reconstruction method based on point cloud.
Background
With the development of robot technology, robots play an increasingly important role in various fields. Applying robot technology to the power industry, so that power maintenance and overhaul are carried out in place of manual labor, can greatly improve the safety and efficiency of operation.
Live working with a robot generally uses either a teleoperation mode or an autonomous mode. Whichever mode is adopted, the working scene needs to be reconstructed in three dimensions: first, to provide visual presence, so that teleoperators can perform strongly immersive human-machine interaction based on virtual reality; second, to give the robot scene-perception capability for autonomous obstacle avoidance and motion planning.
Point cloud-based three-dimensional reconstruction mainly comprises point cloud preprocessing, point cloud registration, and surface reconstruction. The commonly used registration approach combines coarse registration with fine registration. For coarse registration, geometric features of the point clouds are usually extracted first to prepare for establishing correspondences; the currently popular RANSAC-based coarse registration is strongly random, which hurts registration efficiency and slows modeling. For fine registration, the ICP (Iterative Closest Point) algorithm is generally adopted, but conventional ICP treats the entire point set as the set to be registered and does not exclude wrong point pairs, which not only slows registration but also reduces its precision.
Disclosure of Invention
The invention aims to provide a three-dimensional reconstruction method of a distribution line operation scene based on point cloud.
The technical scheme for realizing the purpose of the invention is as follows: a distribution line operation scene three-dimensional reconstruction method based on point cloud comprises the following steps:
step 1, collecting a working scene point cloud and carrying out preprocessing operation on the working scene point cloud;
step 2, segmenting the point cloud scene by adopting a color region growing method;
step 3, registering the point clouds under multiple visual angles by adopting a coarse registration method and a fine registration method;
step 4, establishing an offline model library comprising the lightning arrester and the cross arm;
step 5, performing real-time three-dimensional reconstruction on the operation scene;
step 6, performing surface reconstruction on the point cloud obtained in step 5 with the Poisson algorithm.
Compared with the prior art, the invention has the following remarkable advantages: in view of the particularity of the distribution line scene, the invention adopts an effective preprocessing method to obtain the part point clouds, optimizes the registration method, and improves the speed of three-dimensional reconstruction. The distribution line operation scene can be reconstructed semi-autonomously, which reduces manual intervention and improves the efficiency of operation scene reconstruction.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is a three-dimensional reconstruction flow chart of a distribution line operation scene based on point cloud.
Fig. 2 is a flow chart of an improved point cloud registration.
Fig. 3 is a schematic view of multi-view point cloud registration.
Fig. 4 is a flow chart of replacing the real-time local point cloud with a corresponding model.
Fig. 5 is a diagram of a distribution line scene preprocessing result.
Fig. 6 is a schematic diagram of distribution line scene point cloud segmentation.
Fig. 7 is a distribution line scene point cloud segmentation result diagram, wherein diagram (a) is a cross-arm point cloud, diagram (b) is a left arrester point cloud, and diagram (c) is a right arrester point cloud.
Fig. 8 is a schematic diagram of cross-arm registration at two viewing angles, where (a) is two cross-arm point clouds before registration and (b) is two cross-arm point clouds after registration.
Fig. 9 is a schematic view of registration of arresters at two viewing angles, where diagram (a) is two arrester point clouds before registration and diagram (b) is two arrester point clouds after registration.
Detailed Description
The following describes a specific embodiment of the distribution line operation scene three-dimensional reconstruction method based on point cloud with reference to the accompanying drawings:
In the offline model library stage, a handheld vision acquisition device is moved around the objects in the scene to collect point cloud data from multiple viewing angles; for real-time display of the operation scene, the vision device is fixed at a certain distance outside the operation scene.
The distribution line operation scene three-dimensional reconstruction flow chart based on point cloud is shown in fig. 1, and comprises the following steps:
step 1, collecting operation scene point clouds and carrying out preprocessing operation on the operation scene point clouds; the method comprises the following specific steps:
step 1-1, selecting an interested area by adopting a conditional filtering method:
because the point cloud is a set of three-dimensional coordinates, the range of the point cloud in the x, y and z directions can be limited according to the prior knowledge, and the region where the scene is probably located is determined;
step 1-2, performing self-adaptive voxel down-sampling on the point cloud obtained in the step 1-1, and removing outliers; establishing a three-dimensional voxel grid for the point cloud, assuming that the edge length of a cube is linearly related to the average nearest neighbor Euclidean distance of the point cloud, and approximately representing the voxel by using the gravity center of a point set in each voxel to realize the down-sampling of the point cloud; removing sparse outliers by adopting a statistical-based method;
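The preprocessing chain of steps 1-1 and 1-2 can be summarized in a short sketch. The snippet below assumes Open3D as the point cloud library (the method itself names none); the ROI bounds, the voxel-size factor of 2.0, and the outlier parameters are illustrative assumptions:

```python
import numpy as np
import open3d as o3d

def preprocess(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
    # Step 1-1: conditional filter -- keep points inside a prior x/y/z box.
    roi = o3d.geometry.AxisAlignedBoundingBox(min_bound=(-1.0, -1.0, 0.5),
                                              max_bound=(1.0, 1.0, 2.5))
    pcd = pcd.crop(roi)
    # Step 1-2: adaptive voxel downsampling -- the voxel edge length is taken
    # linear in the mean nearest-neighbor distance, and each voxel is
    # represented by the centroid of the points that fall inside it.
    d = np.asarray(pcd.compute_nearest_neighbor_distance()).mean()
    pcd = pcd.voxel_down_sample(voxel_size=2.0 * d)
    # Statistics-based removal of sparse outliers.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd
```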
step 2, automatically segmenting the red lightning arrester and the gray cross arm with the color region growing method;
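Color region growing itself is standard; a compact sketch of the idea follows, with all thresholds as illustrative assumptions. Points grow from unvisited seeds to spatial neighbors whose RGB distance stays below a threshold, so the red arrester and the gray cross arm end up in separate clusters:

```python
import numpy as np
from scipy.spatial import cKDTree

def color_region_growing(points, colors, radius=0.02, color_thresh=30.0, min_size=100):
    """points: (N, 3) float array; colors: (N, 3) RGB; returns a list of index arrays."""
    tree = cKDTree(points)
    visited = np.zeros(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if visited[seed]:
            continue
        visited[seed] = True
        stack, region = [seed], []
        while stack:
            i = stack.pop()
            region.append(i)
            # grow to spatial neighbors with a similar color
            for j in tree.query_ball_point(points[i], radius):
                if not visited[j] and np.linalg.norm(
                        colors[i].astype(float) - colors[j].astype(float)) < color_thresh:
                    visited[j] = True
                    stack.append(j)
        if len(region) >= min_size:
            clusters.append(np.array(region))
    return clusters
```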
step 3, registering the point clouds of the same object under two visual angles by adopting a coarse registration method and a fine registration method, as shown in fig. 2, and specifically comprising the following steps:
step 3-1, extracting key points of the point cloud with the SIFT 3D algorithm and calculating the FPFH (Fast Point Feature Histogram) features of the key points, specifically:
step 3-1-1, detecting the characteristic points of the scale space, wherein the used scale space and Gaussian difference function are as follows:
scale space: l (x, y, z, σ) G (x, y, z, σ) P (x, y, z)
Gaussian difference function: d (x, y, z, k)1 iσ)=L(x,y,z,k1 (i+1)σ)-L(x,y,z,k1 iσ),i∈[0,s+2]
Wherein G (x, y, z, σ) is a Gaussian nucleus,p (x, y, z) is a point in the point cloud, σ is a scale space factor, k1Is a constant multiplication factor, and s is the number of layers in the pyramid group;
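To make the scale-space construction concrete: a raw point cloud carries no intensity channel, so the sketch below uses Gaussian-weighted local density as the scalar field L, which is an assumption on our part (the text does not name the field). Key points are then the local extrema of D over space and scale (the extrema search is omitted here):

```python
import numpy as np
from scipy.spatial import cKDTree

def dog_responses(points, sigma, k1=2.0 ** (1.0 / 3.0), s=3):
    """Return the DoG responses D(., k1^i * sigma) for i in [0, s+2]."""
    tree = cKDTree(points)
    L = []
    for i in range(s + 4):                              # scales k1^0*sigma .. k1^(s+3)*sigma
        sc = sigma * k1 ** i
        neigh = tree.query_ball_point(points, 3.0 * sc)  # truncate the kernel at 3*sigma
        L.append(np.array([
            np.exp(-np.sum((points[idx] - p) ** 2, axis=1) / (2 * sc * sc)).sum()
            for p, idx in zip(points, neigh)]))
    return [L[i + 1] - L[i] for i in range(s + 3)]       # D = L(k1^(i+1)s) - L(k1^i s)
```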
step 3-1-2, removing low-contrast feature points and edge response points, specifically:
substituting the feature point (x, y, z) into the Gaussian difference function; if the absolute value of the resulting value is greater than the threshold τ1, the point is retained, otherwise it is removed; the edge points are then removed;
3-1-3, determining the main direction of the key point; the amplitude m(x, y, z), azimuth angle θ(x, y, z), and pitch angle φ(x, y, z) from the key point and its k neighborhood points to the neighborhood center point are respectively:
m(x, y, z) = √((x_i - x_c)² + (y_i - y_c)² + (z_i - z_c)²)
θ(x, y, z) = tan⁻¹((y_i - y_c)/(x_i - x_c))
φ(x, y, z) = sin⁻¹((z_i - z_c)/m(x, y, z))
where (x_i, y_i, z_i) (i = 1, 2, …, k+1) are the key point and its k neighborhood points and (x_c, y_c, z_c) is the neighborhood center point; the azimuth θ(x, y, z) and pitch φ(x, y, z) over the key point neighborhood are accumulated into a histogram with the amplitude m(x, y, z) as the weight, and the main peak of the histogram is selected as the main direction of the key point;
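A numpy sketch of the main-direction step follows; np.arctan2 is used in place of the plain tan⁻¹ quotient for quadrant safety, and the 8×4 bin layout of the two-angle histogram is an illustrative assumption:

```python
import numpy as np

def main_direction(neigh, center, n_theta=8, n_phi=4):
    """neigh: (k+1, 3) key point plus its k neighbors; center: (3,) neighborhood center."""
    d = neigh - center
    m = np.linalg.norm(d, axis=1)                                    # amplitude
    theta = np.arctan2(d[:, 1], d[:, 0])                             # azimuth
    phi = np.arcsin(np.clip(d[:, 2] / np.maximum(m, 1e-12), -1, 1))  # pitch
    hist, _, _ = np.histogram2d(theta, phi, bins=(n_theta, n_phi),
                                range=((-np.pi, np.pi), (-np.pi / 2, np.pi / 2)),
                                weights=m)                           # m-weighted counts
    it, ip = np.unravel_index(np.argmax(hist), hist.shape)
    # return the bin-center angles of the dominant peak
    return (-np.pi + (it + 0.5) * 2 * np.pi / n_theta,
            -np.pi / 2 + (ip + 0.5) * np.pi / n_phi)
```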
step 3-1-4, establishing the FPFH feature description of the key points:
FPFH(P_i) = SPFH(P_i) + (1/k) Σ (1/ω_k) · SPFH(P_k)
where P_i is a key point, P_k is one of its k neighborhood points, the distance between P_i and P_k is taken as the weight ω_k, and SPFH is the simplified point feature histogram;
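In practice the FPFH descriptors are usually computed with a library; below is a sketch with Open3D (our assumption; the radii and neighbor caps are illustrative) that returns one 33-bin descriptor per point:

```python
import open3d as o3d

def fpfh_features(pcd: o3d.geometry.PointCloud, voxel: float = 0.01):
    # FPFH needs normals; estimate them from a local neighborhood first.
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=3 * voxel, max_nn=30))
    # One 33-dimensional histogram per point (the feature's .data has shape (33, N)).
    return o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
```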
step 3-2, carrying out rough registration based on sampling consistency on point clouds under different visual angles, wherein the specific process is as follows:
step 3-2-1, randomly selecting s key points from the source point cloud P while ensuring that the pairwise distances between the selected points are larger than a preset minimum distance d_min;
step 3-2-2, for each key point s_i, finding in the target point cloud Q the set of points whose FPFH features are similar to those of s_i, and randomly drawing one point from this set as the corresponding point of the sample point s_i;
step 3-2-3, estimating a rigid body transformation matrix from the set of s point pairs and evaluating the quality of the rigid transformation by computing an error metric, usually the Huber penalty:
L_h(e_i) = (1/2)·e_i²  if ‖e_i‖ ≤ t_e
L_h(e_i) = (1/2)·t_e·(2‖e_i‖ - t_e)  if ‖e_i‖ > t_e
where e_i is the Euclidean distance of the i-th point pair after the rigid body transformation, t_e is a constant, and L_h(e_i) is the error metric of the i-th point pair;
step 3-2-4, if the error falls within the expected range or the maximum number of iterations m is reached, end the iteration; otherwise, return to step 3-2-1;
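A sketch of this coarse stage, assuming Open3D's feature-matching RANSAC is an acceptable stand-in for the sampling-consensus procedure above; `huber` mirrors the step 3-2-3 metric, and every threshold is illustrative:

```python
import numpy as np
import open3d as o3d

def huber(e, t_e):
    """Step 3-2-3 error metric: quadratic within t_e, linear beyond it."""
    e = np.abs(e)
    return np.where(e <= t_e, 0.5 * e ** 2, 0.5 * t_e * (2 * e - t_e))

def coarse_register(src, tgt, src_fpfh, tgt_fpfh, dist=0.05):
    """Estimate a rigid transform from FPFH correspondences by RANSAC."""
    reg = o3d.pipelines.registration
    return reg.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh,
        mutual_filter=True,
        max_correspondence_distance=dist,
        estimation_method=reg.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[reg.CorrespondenceCheckerBasedOnDistance(dist)],
        criteria=reg.RANSACConvergenceCriteria(100000, 0.999))
```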
3-3, performing fine registration based on an improved ICP algorithm on the point clouds under different viewing angles, specifically:
step 3-3-1, determining corresponding point pairs: searching the target point cloud Q for the set of closest points {q_i} corresponding to the key point set {p_i} of the source point cloud P;
step 3-3-2, setting a weight for each point pair, where Dist_max is the maximum of the distances between all point pairs and weight_i is the weight of the i-th point pair; given a threshold t, if weight_i < t, the point pair is eliminated;
step 3-3-3, estimating the rotation matrix R and the translation matrix T by SVD (singular value decomposition), applying the rotation and translation transformations to the source point cloud P, and computing the error sum function;
step 3-3-4, judging whether the error sum is smaller than the threshold τ or the maximum number of iterations n has been reached; if either condition is satisfied, the fine registration is finished, otherwise return to step 3-3-1;
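One iteration of the improved fine registration can be sketched in numpy. The weight form `1 - d/Dist_max` is an assumption consistent with the text above (the exact weight formula is not reproduced here), `t` is the weight threshold, and R, T come from the usual SVD closed form:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(P, Q, t=0.4):
    """P, Q: (n, 3) source/target arrays; returns R, T and the weighted error sum."""
    d, idx = cKDTree(Q).query(P)           # step 3-3-1: closest point q_i for each p_i
    w = 1.0 - d / d.max()                  # step 3-3-2: distance-based weight (assumed form)
    keep = w >= t                          # eliminate wrong pairs below the threshold
    p, q, w = P[keep], Q[idx[keep]], w[keep]
    mp = np.average(p, axis=0, weights=w)
    mq = np.average(q, axis=0, weights=w)
    H = (w[:, None] * (p - mp)).T @ (q - mq)
    U, _, Vt = np.linalg.svd(H)            # step 3-3-3: closed-form R, T via SVD
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = mq - R @ mp
    err = np.sum(w * np.linalg.norm(q - (p @ R.T + T), axis=1) ** 2)
    return R, T, err
```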
step 4, establishing an offline model library comprising the lightning arrester Q1 and the cross arm Q2; the specific steps are as follows:
step 4-1, acquiring point clouds of an operation scene under multiple visual angles, and respectively obtaining arrester point clouds and cross arm point clouds under the multiple visual angles through the preprocessing of the step 1 and the point cloud segmentation of the step 2;
step 4-2, taking the point cloud of view 1 as the reference, registering the point clouds of the other views onto it via step 3 to form the complete arrester point cloud and cross arm point cloud, i.e., completing the construction of the offline model library, as shown in fig. 3;
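Chaining the earlier sketches gives a minimal reading of step 4 (the accumulation strategy, concatenating each aligned view into the reference cloud, is our assumed reading of fig. 3; `coarse_register` and `fpfh_features` are the illustrative helpers defined above, and the fine ICP refinement is omitted for brevity):

```python
def build_part_model(views):
    """views: list of preprocessed, segmented Open3D clouds of one part."""
    model = views[0]                        # view 1 fixes the reference frame
    for v in views[1:]:
        result = coarse_register(v, model, fpfh_features(v), fpfh_features(model))
        v.transform(result.transformation)  # coarse pose; refine with icp_step in practice
        model += v                          # Open3D point clouds concatenate with '+'
    return model
```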
step 5, finishing the real-time three-dimensional reconstruction of the operation scene; as shown in fig. 4, the specific steps are as follows:
step 5-1, collecting scene point cloud data in real time;
step 5-2, applying the preprocessing of step 1, then the automatic segmentation of step 2;
step 5-3, registering each segmentation result P_i, with the method of step 3, against the model library arrester point cloud Q1 and cross arm point cloud Q2 respectively, obtaining registration errors e_ij (j = 1, 2);
step 5-4, taking the model with the smaller registration error as the match for P_i, and performing a rigid body transformation on that model point cloud to replace the current point cloud P_i.
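A sketch of steps 5-3 and 5-4, using Open3D's plain ICP as a stand-in for the full coarse-plus-fine pipeline and its `inlier_rmse` as the registration error e_ij (both assumptions); the winning library model is rigidly transformed into the scene and returned in place of the raw cluster:

```python
import copy
import numpy as np
import open3d as o3d

def substitute_model(cluster, library, dist=0.02):
    """cluster: one segmented scene cloud P_i; library: {'arrester': Q1, 'cross_arm': Q2}."""
    reg = o3d.pipelines.registration
    fits = {name: reg.registration_icp(
                model, cluster, dist, np.eye(4),
                reg.TransformationEstimationPointToPoint())
            for name, model in library.items()}
    best = min(fits, key=lambda n: fits[n].inlier_rmse)   # the smaller e_ij wins
    return copy.deepcopy(library[best]).transform(fits[best].transformation)
```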
step 6, performing surface reconstruction on the point cloud obtained in step 5 with the Poisson surface reconstruction algorithm.
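A sketch of this final step with Open3D's Poisson reconstruction (an assumed implementation; the octree depth of 9 is illustrative). Poisson reconstruction needs consistently oriented normals, so they are estimated and oriented first:

```python
import open3d as o3d

def poisson_mesh(pcd: o3d.geometry.PointCloud) -> o3d.geometry.TriangleMesh:
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(30)  # consistent orientation helps Poisson
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    return mesh
```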
The invention realizes semi-autonomous three-dimensional reconstruction of the distribution line scene and reduces manual intervention. For the key registration step, SIFT 3D key point extraction and FPFH feature description vectors are adopted to ensure the quality of the key points; weights are set on the point-pair relations and wrong point pairs are eliminated, which accelerates registration and improves the efficiency of three-dimensional reconstruction.
The present invention will be described in further detail with reference to examples.
Examples
(1) Object
Scene point clouds are collected with a Kinect 2 from a simulated real distribution line scene built according to power-industry standards.
(2) Results of the process
The method of the invention comprises three key processes: preprocessing, segmentation, and registration; the experimental simulation results are shown below for each of the three.
Preprocessing selects the region of interest in the operation scene, uniformly downsamples the originally dense point cloud so that the number of points is appropriate, and removes the interference caused by noise; the result is shown in fig. 5.
Then, the automatic segmentation of the lightning arrester and the cross arm is realized by adopting a color region growing method, a segmentation effect graph is shown in fig. 6, and a segmentation result graph is shown in fig. 7.
The point clouds of the same object under different visual angles are registered to the same visual angle, and schematic diagrams before and after the registration of the cross arm and the lightning arrester are respectively shown in fig. 8 and 9 by using the method.
Table 1 compares the accuracy and speed of arrester registration between the proposed method and a method combining RANSAC-based coarse registration with traditional ICP registration.
TABLE 1
As shown in Table 1, the method of the invention takes less time and achieves higher registration precision.
(3) Results
Based on the preprocessing operation, the point cloud segmentation and the point cloud registration technology, a complete off-line model of each part is established, real-time data is registered with the off-line model, and finally three-dimensional reconstruction of a distribution line operation scene is completed.
Claims (3)
1. A distribution line operation scene three-dimensional reconstruction method based on point cloud is characterized by comprising the following steps:
step 1, collecting operation scene point clouds and preprocessing them, specifically by conditional filtering, downsampling, and outlier removal;
step 2, segmenting the point cloud scene by adopting a color region growing method;
step 3, registering the point clouds under multiple visual angles by adopting a coarse registration method and a fine registration method, which specifically comprises the following steps:
step 3-1, extracting key points of the point cloud with the SIFT 3D algorithm and calculating the FPFH (Fast Point Feature Histogram) features of the key points, specifically:
step 3-1-1, detecting the characteristic points of the scale space, wherein the used scale space and Gaussian difference function are as follows:
scale space: l (x, y, z, σ) G (x, y, z, σ) P (x, y, z)
Gaussian difference function: d (x, y, z, k)1 iσ)=L(x,y,z,k1 (i+1)σ)-L(x,y,z,k1 iσ),i∈[0,s+2]
Wherein G (x, y, z, σ) is a Gaussian nucleus,p (x, y, z) is a point in the point cloud, σ is a scale space factor, k1Is a constant multiplication factor, and s is the number of layers in the pyramid group;
step 3-1-2, removing low-contrast feature points and edge response points, specifically:
substituting the feature point (x, y, z) into the Gaussian difference function; if the absolute value of the resulting value is greater than the threshold τ1, the point is retained, otherwise it is removed; the edge points are then removed;
3-1-3, determining the main direction of the key point; the amplitude m(x, y, z), azimuth angle θ(x, y, z), and pitch angle φ(x, y, z) from the key point and its k neighborhood points to the neighborhood center point are respectively:
m(x, y, z) = √((x_i - x_c)² + (y_i - y_c)² + (z_i - z_c)²)
θ(x, y, z) = tan⁻¹((y_i - y_c)/(x_i - x_c))
φ(x, y, z) = sin⁻¹((z_i - z_c)/m(x, y, z))
where (x_i, y_i, z_i) (i = 1, 2, …, k+1) are the key point and its k neighborhood points and (x_c, y_c, z_c) is the neighborhood center point; the azimuth θ(x, y, z) and pitch φ(x, y, z) over the key point neighborhood are accumulated into a histogram with the amplitude m(x, y, z) as the weight, and the main peak of the histogram is selected as the main direction of the key point;
step 3-1-4, establishing the FPFH feature description of the key points:
FPFH(P_i) = SPFH(P_i) + (1/k) Σ (1/ω_k) · SPFH(P_k)
where P_i is a key point, P_k is one of its k neighborhood points, the distance between P_i and P_k is taken as the weight ω_k, and SPFH is the simplified point feature histogram;
step 3-2, carrying out rough registration based on sampling consistency on point clouds under different visual angles, wherein the specific process is as follows:
step 3-2-1, randomly selecting s key points from the source point cloud P while ensuring that the pairwise distances between the selected points are larger than a preset minimum distance d_min;
step 3-2-2, for each key point s_i, finding in the target point cloud Q the set of points whose FPFH features are similar to those of s_i, and randomly drawing one point from this set as the corresponding point of the sample point s_i;
step 3-2-3, estimating a rigid body transformation matrix from the set of s point pairs and evaluating the quality of the rigid transformation by computing an error metric, usually the Huber penalty:
L_h(e_i) = (1/2)·e_i²  if ‖e_i‖ ≤ t_e
L_h(e_i) = (1/2)·t_e·(2‖e_i‖ - t_e)  if ‖e_i‖ > t_e
where e_i is the Euclidean distance of the i-th point pair after the rigid body transformation, t_e is a constant, and L_h(e_i) is the error metric of the i-th point pair;
step 3-2-4, if the error falls within the expected range or the maximum number of iterations m is reached, end the iteration; otherwise, return to step 3-2-1;
3-3, performing fine registration based on an improved ICP algorithm on the point clouds under different viewing angles, specifically:
step 3-3-1, determining corresponding point pairs: searching the target point cloud Q for the set of closest points {q_i} corresponding to the key point set {p_i} of the source point cloud P;
step 3-3-2, setting a weight for each point pair, where Dist_max is the maximum of the distances between all point pairs and weight_i is the weight of the i-th point pair; given a threshold t, if weight_i < t, the point pair is eliminated;
step 3-3-3, estimating the rotation matrix R and the translation matrix T by SVD (singular value decomposition), applying the rotation and translation transformations to the source point cloud P, and computing the error sum function;
step 3-3-4, judging whether the error sum is smaller than the threshold τ or the maximum number of iterations n has been reached; if either condition is satisfied, the fine registration is finished, otherwise return to step 3-3-1;
step 4, establishing an offline model library, which comprises the lightning arrester and the cross arm;
step 5, performing real-time three-dimensional reconstruction on the operation scene;
step 6, performing surface reconstruction on the point cloud obtained in step 5 with the Poisson algorithm.
2. The point cloud-based three-dimensional reconstruction method for the distribution line operation scene according to claim 1, wherein step 4 establishes an offline model library comprising the lightning arrester Q1 and the cross arm Q2, specifically comprising the following steps:
step 4-1, acquiring point clouds of an operation scene under multiple visual angles, and respectively obtaining arrester point clouds and cross arm point clouds under the multiple visual angles through the preprocessing of the step 1 and the point cloud segmentation of the step 2;
step 4-2, taking the point cloud of view 1 as the reference, registering the point clouds of the other views onto it via step 3 to form the complete arrester point cloud and cross arm point cloud, i.e., completing the construction of the offline model library.
3. The point cloud-based three-dimensional reconstruction method for the distribution line work scene according to claim 1, wherein the step 5 is used for performing real-time three-dimensional reconstruction on the work scene, and specifically comprises the following steps:
step 5-1, collecting scene point cloud data in real time;
step 5-2, preprocessing the acquired data as in step 1, and then automatically segmenting with the method of step 2;
step 5-3, registering each segmentation result P_i, with the method of step 3, against the model library arrester point cloud Q1 and cross arm point cloud Q2 respectively, obtaining registration errors e_ij (j = 1, 2);
step 5-4, taking the model with the smaller registration error as the match for P_i, and performing a rigid body transformation on that model point cloud to replace the current point cloud P_i.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711242672.3A CN107886528B (en) | 2017-11-30 | 2017-11-30 | Distribution line operation scene three-dimensional reconstruction method based on point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107886528A CN107886528A (en) | 2018-04-06 |
CN107886528B (en) | 2021-09-03
Family
ID=61776361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711242672.3A Active CN107886528B (en) | 2017-11-30 | 2017-11-30 | Distribution line operation scene three-dimensional reconstruction method based on point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107886528B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537805B (en) * | 2018-04-16 | 2021-09-21 | 中北大学 | Target identification method based on feature geometric benefits |
CN108872991A (en) * | 2018-05-04 | 2018-11-23 | 上海西井信息科技有限公司 | Target analyte detection and recognition methods, device, electronic equipment, storage medium |
CN109272537B (en) * | 2018-08-16 | 2021-08-13 | 清华大学 | Panoramic point cloud registration method based on structured light |
CN109472816B (en) * | 2018-09-17 | 2021-12-28 | 西北大学 | Point cloud registration method |
CN109345523B (en) * | 2018-09-21 | 2022-08-16 | 中国科学院苏州生物医学工程技术研究所 | Surface defect detection and three-dimensional modeling method |
CN109559340B (en) * | 2018-11-29 | 2023-06-09 | 东北大学 | Parallel three-dimensional point cloud data automatic registration method |
CN111311651B (en) * | 2018-12-11 | 2023-10-20 | 北京大学 | Point cloud registration method and device |
CN109934859B (en) * | 2019-03-18 | 2023-03-24 | 湖南大学 | ICP (inductively coupled plasma) registration method based on feature-enhanced multi-dimensional weight descriptor |
CN110039561B (en) * | 2019-05-14 | 2022-10-14 | 南京理工大学 | Live working robot teleoperation personnel training system and method based on point cloud |
CN110097582B (en) * | 2019-05-16 | 2023-03-31 | 广西师范大学 | Point cloud optimal registration and real-time display system and working method |
CN110415342B (en) * | 2019-08-02 | 2023-04-18 | 深圳市唯特视科技有限公司 | Three-dimensional point cloud reconstruction device and method based on multi-fusion sensor |
CN110555909B (en) * | 2019-08-29 | 2023-06-27 | 中国南方电网有限责任公司 | Power transmission tower model construction method, device, computer equipment and storage medium |
CN110580703B (en) * | 2019-09-10 | 2024-01-23 | 广东电网有限责任公司 | Distribution line detection method, device, equipment and storage medium |
CN110942515A (en) * | 2019-11-26 | 2020-03-31 | 北京迈格威科技有限公司 | Point cloud-based target object three-dimensional computer modeling method and target identification method |
CN111612841B (en) * | 2020-06-22 | 2023-07-14 | 上海木木聚枞机器人科技有限公司 | Target positioning method and device, mobile robot and readable storage medium |
CN112862878B (en) * | 2021-02-07 | 2024-02-13 | 浙江工业大学 | Mechanical arm blank repairing method based on 3D vision |
CN115063458A (en) * | 2022-07-27 | 2022-09-16 | 武汉工程大学 | Material pile volume calculation method based on three-dimensional laser point cloud |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10426372B2 (en) * | 2014-07-23 | 2019-10-01 | Sony Corporation | Image registration system with non-rigid registration and method of operation thereof |
US9589355B2 (en) * | 2015-03-16 | 2017-03-07 | Here Global B.V. | Guided geometry extraction for localization of a device |
- 2017-11-30: application CN201711242672.3A filed in China (granted as CN107886528B, status Active)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104299260A (en) * | 2014-09-10 | 2015-01-21 | 西南交通大学 | Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration |
CN104657990A (en) * | 2015-02-06 | 2015-05-27 | 北京航空航天大学 | Two-dimensional contour fast registration method |
CN104715254A (en) * | 2015-03-17 | 2015-06-17 | 东南大学 | Ordinary object recognizing method based on 2D and 3D SIFT feature fusion |
CN104966287A (en) * | 2015-06-08 | 2015-10-07 | 浙江大学 | Hierarchical multi-piece point cloud rigid registration method |
CN105469388A (en) * | 2015-11-16 | 2016-04-06 | 集美大学 | Building point cloud registration algorithm based on dimension reduction |
CN105654483A (en) * | 2015-12-30 | 2016-06-08 | 四川川大智胜软件股份有限公司 | Three-dimensional point cloud full-automatic registration method |
CN106296693A (en) * | 2016-08-12 | 2017-01-04 | 浙江工业大学 | Based on 3D point cloud FPFH feature real-time three-dimensional space-location method |
CN206091522U (en) * | 2016-09-26 | 2017-04-12 | 安徽华电工程咨询设计有限公司 | Existing single loop strain insulator iron tower cable draws lower beam |
Non-Patent Citations (1)
Title |
---|
An Improved ICP Algorithm; Wang Jun et al.; Journal of Chongqing University of Technology (Natural Science); 2011-10-31; Vol. 25, No. 10; pp. 71-75 *
Also Published As
Publication number | Publication date |
---|---|
CN107886528A (en) | 2018-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107886528B (en) | Distribution line operation scene three-dimensional reconstruction method based on point cloud | |
CN109658413B (en) | Method for detecting grabbing position of robot target object | |
CN110340891B (en) | Mechanical arm positioning and grabbing system and method based on point cloud template matching technology | |
CN112070818B (en) | Robot disordered grabbing method and system based on machine vision and storage medium | |
CN106709950B (en) | Binocular vision-based inspection robot obstacle crossing wire positioning method | |
CN110223345B (en) | Point cloud-based distribution line operation object pose estimation method | |
CN111028292B (en) | Sub-pixel level image matching navigation positioning method | |
Song et al. | CAD-based pose estimation design for random bin picking using a RGB-D camera | |
CN107481274B (en) | Robust reconstruction method of three-dimensional crop point cloud | |
CN108171715B (en) | Image segmentation method and device | |
CN112907735B (en) | Flexible cable identification and three-dimensional reconstruction method based on point cloud | |
CN108022262A (en) | A kind of point cloud registration method based on neighborhood of a point center of gravity vector characteristics | |
CN111178138B (en) | Distribution network wire operating point detection method and device based on laser point cloud and binocular vision | |
CN107490356B (en) | Non-cooperative target rotating shaft and rotation angle measuring method | |
CN105046694A (en) | Quick point cloud registration method based on curved surface fitting coefficient features | |
CN113781561B (en) | Target pose estimation method based on self-adaptive Gaussian weight quick point feature histogram | |
CN112669385A (en) | Industrial robot workpiece identification and pose estimation method based on three-dimensional point cloud characteristics | |
CN109410248B (en) | Flotation froth motion characteristic extraction method based on r-K algorithm | |
CN114972377A (en) | 3D point cloud segmentation method and device based on moving least square method and hyper-voxels | |
CN110766782A (en) | Large-scale construction scene real-time reconstruction method based on multi-unmanned aerial vehicle visual cooperation | |
CN110097598A (en) | A kind of three-dimension object position and orientation estimation method based on PVFH feature | |
CN113362463A (en) | Workpiece three-dimensional reconstruction method based on Gaussian mixture model | |
CN116958264A (en) | Bolt hole positioning and pose estimation method based on three-dimensional vision | |
CN112434559A (en) | Robot identification and positioning method | |
CN116452604A (en) | Complex substation scene segmentation method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||