CN113927606B - Robot 3D vision grabbing method and system - Google Patents

Robot 3D vision grabbing method and system

Info

Publication number
CN113927606B
CN113927606B
Authority
CN
China
Prior art keywords
point cloud
robot
grabbed
model
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111560940.2A
Other languages
Chinese (zh)
Other versions
CN113927606A (en)
Inventor
卿黎明
李婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Shibite Robot Co Ltd
Original Assignee
Hunan Shibite Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Shibite Robot Co Ltd filed Critical Hunan Shibite Robot Co Ltd
Priority to CN202111560940.2A priority Critical patent/CN113927606B/en
Publication of CN113927606A publication Critical patent/CN113927606A/en
Application granted granted Critical
Publication of CN113927606B publication Critical patent/CN113927606B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to the technical field of 3D vision, and discloses a robot 3D vision grabbing method and system.

Description

Robot 3D vision grabbing method and system
Technical Field
The invention relates to the technical field of 3D vision, in particular to a robot 3D vision grabbing method and system.
Background
With the rise of artificial intelligence, robots are playing an increasingly important role in many industries. In recent years, industrial robots have been widely applied to operations such as palletizing, welding, handling, assembling and painting, and grabbing is an indispensable skill for robots entering the real world, for example sorting objects in the logistics industry or assembling parts on an industrial production line. How to realize the part-grabbing function of a robot has therefore become an urgent problem to be solved.
Disclosure of Invention
The invention provides a robot 3D vision grabbing method and system, which aim to solve the problems in the prior art.
To achieve this purpose, the invention adopts the following technical solution:
in a first aspect, the present invention provides a robot 3D vision grasping method, including:
acquiring first actual point cloud data of a part to be grabbed at a standard photographing position of the robot;
preprocessing the first actual point cloud data to obtain first scene point cloud data;
roughly matching the first scene point cloud data with a first preset model point cloud to obtain a plurality of candidate poses of the part to be grabbed;
performing fine matching on the candidate poses and poses in the first preset model point cloud to obtain a transformation matrix between the part to be grabbed and the model part;
calculating the coordinates of the positions to be grabbed of the parts to be grabbed according to the transformation matrix and the positions of the model parts, and grabbing the parts to be grabbed based on the coordinates of the positions to be grabbed;
after a part to be grabbed is grabbed, acquiring second actual point cloud data shot by a deviation correcting camera, wherein the second actual point cloud data comprises a first position relation between a robot gripper and the part to be grabbed;
preprocessing the second actual point cloud data to obtain second scene point cloud data;
matching the second scene point cloud data with a second preset model point cloud, wherein the second preset model point cloud comprises a second position relation between the robot gripper and the model part;
determining deviation information between the first position relation and the second position relation under the condition that the first position relation and the second position relation have deviation, wherein the deviation information comprises position movement information between second scene point cloud data and a second preset model point cloud;
calculating the placing position coordinates of the part to be grabbed according to the deviation information and the placing position coordinates of the model part;
the coordinate R_2 of the position to be grabbed satisfies the following relation:
R_2 = R_p2 × E_n × M_2 × M_1^{-1} × E_s^{-1} × R_p1^{-1} × R_1
where R_p1 denotes the reference photographing point of the robot, i.e. the photographing point at the standard photographing position of the robot; R_p2 denotes the photographing position for the part to be grabbed; E_s denotes the camera extrinsic parameters for the standard model; E_n denotes the camera extrinsic parameters when the part to be grabbed is photographed; P_1 denotes the pose of the model part in the captured point cloud relative to the model; P_2 denotes the pose of the part to be grabbed in the captured point cloud relative to the model; M_1 denotes the pose transformation matrix between P_1 and the model; M_2 denotes the pose transformation matrix between P_2 and the model; and R_1 denotes the grabbing position coordinates of the model part, where M_2 is obtained from the fine registration;
the preprocessing of the first actual point cloud data to obtain target point cloud data comprises:
filtering the first actual point cloud data, and performing down-sampling processing on the result of the filtering processing to obtain target point cloud data;
the rough matching of the first scene point cloud data with a first preset model point cloud to obtain a plurality of candidate poses of the part to be grabbed comprises:
calculating global point pair characteristics of the first scene point cloud data, and establishing a hash table as a global model of the target point cloud data by taking the characteristics as keys and point pairs as values;
and carrying out local matching on the global model of the first scene point cloud data and a first preset model point cloud to obtain a plurality of candidate poses.
In a second aspect, the present application provides a robotic 3D vision system comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of the first aspect when executing the computer program.
Beneficial effects:
In the robot 3D vision grabbing method provided by the invention, the standard posture of the model part at the standard photographing position and the standard grabbing position of the robot is first determined, the transformation relation between the part to be grabbed and this standard posture is then obtained by comparison, and the correct grabbing coordinates of the robot are finally derived in reverse, so that accurate grabbing is completed and the grabbing precision of the robot is improved.
Drawings
Fig. 1 is a flowchart of a robot 3D vision capture method according to a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of a preferred embodiment of the present invention after a filtering process;
FIG. 3 is one of the schematic diagrams of the matching process of the preferred embodiment of the present invention;
FIG. 4 is a second schematic diagram of the matching process of the preferred embodiment of the present invention;
FIG. 5 is a third schematic diagram illustrating the matching process according to the preferred embodiment of the present invention;
FIG. 6 is a schematic view of a robot according to a preferred embodiment of the present invention;
FIG. 7 is an enlarged view of a capture camera of a preferred embodiment of the present invention;
FIG. 8 is a schematic view of a robotic work node according to a preferred embodiment of the present invention;
FIG. 9 is a second schematic view of the robot according to the preferred embodiment of the present invention;
FIG. 10 is an enlarged view of a rectification camera according to a preferred embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described clearly and completely below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Referring to fig. 1, an embodiment of the present application provides a robot 3D vision capture method, including:
acquiring first actual point cloud data of a part to be grabbed at a standard photographing position of the robot;
preprocessing the first actual point cloud data to obtain first scene point cloud data;
roughly matching the first scene point cloud data with a first preset model point cloud to obtain a plurality of candidate poses of the part to be grabbed;
carrying out fine matching on a plurality of candidate poses and poses in the first preset model point cloud to obtain a transformation matrix between the part to be grabbed and the model part;
and calculating the coordinates of the position to be grabbed of the part to be grabbed according to the transformation matrix and the position of the model part, and grabbing the part to be grabbed based on the coordinates of the position to be grabbed.
In this embodiment, the first preset model point cloud is a model of a model part in a standard posture of a robot standard photographing position and a standard grabbing position.
In the robot 3D vision grabbing method, the standard posture of the model part at the standard photographing position and the standard grabbing position of the robot is first determined, the transformation relation between the part to be grabbed and this standard posture is then calculated, and the correct grabbing coordinates of the robot are finally derived in reverse, so that accurate grabbing is completed and the grabbing precision of the robot is improved.
Optionally, the preprocessing of the first actual point cloud data to obtain target point cloud data includes:
filtering the first actual point cloud data, and down-sampling the filtered result to obtain the target point cloud data. The filtering removes noise mixed into the point cloud so that the target features can be identified and extracted. Because point clouds are massive and unordered, processing them directly incurs a high computational cost when searching neighborhoods; down-sampling transfers the subsequent operations from the full point cloud to the sampled points and thus reduces the amount of computation.
As shown in fig. 2, optionally, the filtering of the first actual point cloud data includes:
establishing a target point cloud space coordinate system, setting a channel based on the target point cloud space coordinate system, and removing points of the captured point cloud data outside the channel range to obtain a first filtering result;
calculating the average distance from each point of the first filtering result to all of its neighboring points, and removing from the first filtering result the points whose average distance lies outside the preset standard range, to obtain a second filtering result;
dividing the second filtering result into a plurality of regions, designating a seed point as the growth starting point for each region, comparing the points in the regions adjacent to each seed point with the seed point, calculating the similarity, merging points whose similarity exceeds a threshold into the same region, and iterating until no further points satisfying the condition can be merged into the region, to obtain a third filtering result;
and removing the points of the third filtering result that fall outside the preset three-dimensional box, to obtain a fourth filtering result.
In this optional embodiment, noise mixed in the point cloud can be more effectively eliminated by the multi-stage filtering joint processing.
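As a concrete illustration of this multi-stage filtering, the following sketch chains a pass-through (channel) filter, statistical outlier removal and a crop-box filter using Open3D and NumPy; the region-growing stage is left as a placeholder because Open3D has no built-in region-growing segmentation (it is available in PCL). The file path, channel range, outlier thresholds and box bounds are illustrative assumptions, not values taken from the patent.

    # Minimal preprocessing sketch (assumed parameters), Python + Open3D.
    import numpy as np
    import open3d as o3d

    def prefilter(path, z_range=(0.2, 1.5), box_min=(-0.5, -0.5, 0.0), box_max=(0.5, 0.5, 1.5)):
        pcd = o3d.io.read_point_cloud(path)          # first actual point cloud data

        # 1) Pass-through "channel" filter: keep points whose z lies inside the channel range.
        pts = np.asarray(pcd.points)
        keep = np.where((pts[:, 2] >= z_range[0]) & (pts[:, 2] <= z_range[1]))[0]
        first = pcd.select_by_index(keep)

        # 2) Statistical outlier removal: drop points whose mean neighbor distance
        #    falls outside the preset standard range.
        second, _ = first.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

        # 3) Region-growing segmentation would go here (placeholder in this sketch).
        third = second

        # 4) Crop-box filter: remove points outside the preset three-dimensional box.
        box = o3d.geometry.AxisAlignedBoundingBox(np.array(box_min), np.array(box_max))
        fourth = third.crop(box)
        return fourth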
Optionally, the down-sampling comprises any one of uniform down-sampling, voxel down-sampling, curvature down-sampling, and Poisson disk sampling.
The uniform down-sampling may refer to taking one point out of every k points according to the point-to-point spacing. Voxel down-sampling may refer to voxelizing the three-dimensional space and sampling one point in each voxel, using the voxel center or the point closest to the center as the sampled point. Curvature down-sampling may refer to assigning weights according to the magnitude of the curvature at each point and then sampling. Poisson disk sampling may refer to generating enough grid cells in space such that the distance between two non-adjacent cells is larger than the sampling radius, with at most one sample placed in each cell and enough cells to meet the required number of samples. By adopting such down-sampling methods, the point cloud data volume can be greatly reduced, and the amount of computation in the subsequent steps can be reduced.
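For the down-sampling step, the sketch below shows voxel and uniform down-sampling with Open3D, continuing from the filtered cloud produced by the preprocessing sketch above; the voxel size and the every-k parameter are illustrative assumptions.

    # Down-sampling sketch (assumed parameters).
    # Voxel down-sampling: keep one representative point per occupied voxel.
    downsampled = fourth.voxel_down_sample(voxel_size=0.005)   # 5 mm voxels (assumed)
    # Uniform down-sampling: keep every k-th point.
    uniform = fourth.uniform_down_sample(every_k_points=10)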
Optionally, roughly matching the first scene point cloud data with the first preset model point cloud to obtain a plurality of candidate poses of the part to be grabbed, including:
calculating global point pair characteristics of the first scene point cloud data, and establishing a hash table as a global model of the target point cloud data by taking the characteristics as keys and point pairs as values;
and carrying out local matching on the global model of the first scene point cloud data and the first preset model point cloud to obtain a plurality of candidate poses.
In this alternative embodiment, as shown in figs. 3-5, the coarse registration algorithm is split into two parts: global modeling and local matching. In global modeling, global point pair features are computed, a feature set is extracted from the model, and a hash table is built with the features as keys and the point pairs (or point-pair sets) as values. In local matching, sample points are selected in the scene to form a number of point pairs, similar point pairs in the model are looked up for each scene point pair, and a transformation matrix is computed for each match; the several transformation matrices with the best scores are then selected from all transformation matrices to obtain a plurality of candidate poses.
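The global-modeling half of this coarse registration is commonly realized with point pair features in the style of Drost et al.; the sketch below computes the four-component feature for point pairs and stores the pairs in a hash table keyed by the quantized feature. The quantization steps and the use of a Python dict as the hash table are illustrative assumptions.

    import numpy as np
    from collections import defaultdict

    def point_pair_feature(p1, n1, p2, n2):
        # F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))
        d = p2 - p1
        dist = np.linalg.norm(d)
        if dist < 1e-9:
            return None
        d_hat = d / dist
        ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
        return dist, ang(n1, d_hat), ang(n2, d_hat), ang(n1, n2)

    def build_global_model(points, normals, dist_step=0.01, angle_step=np.deg2rad(12)):
        # Hash table: quantized feature -> list of point-pair indices (the "global model").
        table = defaultdict(list)
        n = len(points)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                f = point_pair_feature(points[i], normals[i], points[j], normals[j])
                if f is None:
                    continue
                key = (int(f[0] / dist_step), int(f[1] / angle_step),
                       int(f[2] / angle_step), int(f[3] / angle_step))
                table[key].append((i, j))
        return table

During local matching, the same feature is computed for sampled scene point pairs and looked up in this table to vote for candidate transformations.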
Optionally, finely matching the plurality of candidate poses with the pose in the first preset model point cloud to obtain a transformation matrix between the part to be grabbed and the model part includes:
setting an initial transformation matrix of fine registration;
and calculating corresponding points in the scene point cloud of each candidate pose according to the initial transformation matrix, searching the closest point to the corresponding points in the first preset model point cloud, finally recalculating the transformation matrix, and iterating for multiple times until a target threshold is met to obtain an optimal transformation matrix.
In this alternative embodiment, first, an initial transformation matrix of the fine registration is set; and then, calculating corresponding points in the scene point cloud according to the initial transformation matrix, searching the closest point to the corresponding points in the model, finally recalculating the transformation matrix, and iterating for multiple times until a target threshold is met to obtain an optimal transformation matrix. The condition that the target threshold is met means that the average distance between the scene point cloud and the corresponding point set is smaller than a given threshold.
It should be noted that when the degree of overlap between two groups of point clouds is high, the fine registration can achieve better registration by finding the closest point. However, in the data acquisition process, the scene randomness is large, the initial position difference of the point cloud is large, and the fine registration is easy to fall into local optimization. Therefore, the invention takes a plurality of candidate poses obtained by the coarse registration algorithm as initial parameters of the fine registration, and can obtain a global optimal transformation matrix. As shown in fig. 5, the point cloud registration image obtained by the coarse registration algorithm is optimized by the fine registration algorithm, and then the registration effect is obviously improved.
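A common way to realize this fine-matching step is point-to-point ICP started from each coarse candidate; the sketch below runs Open3D's ICP from every candidate pose and keeps the transformation with the best score. The correspondence distance and iteration count are illustrative assumptions.

    import open3d as o3d

    def refine_candidates(scene, model, candidate_poses, max_corr_dist=0.01):
        # Run ICP from every coarse candidate pose (4x4 matrices) and keep the best result.
        best, best_score = None, None
        for init in candidate_poses:
            result = o3d.pipelines.registration.registration_icp(
                scene, model, max_corr_dist, init,
                o3d.pipelines.registration.TransformationEstimationPointToPoint(),
                o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))
            score = (result.fitness, -result.inlier_rmse)   # more overlap, lower error
            if best is None or score > best_score:
                best, best_score = result, score
        return best.transformation                          # optimal transformation matrix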
Optionally, the coordinate R_2 of the position to be grabbed satisfies the following relation:
R_2 = R_p2 × E_n × M_2 × M_1^{-1} × E_s^{-1} × R_p1^{-1} × R_1
where R_p1 denotes the reference photographing point of the robot, i.e. the photographing point at the standard photographing position of the robot; R_p2 denotes the photographing position for the part to be grabbed; E denotes the camera extrinsic parameters, with E_s being the extrinsic parameters for the standard model and E_n being the extrinsic parameters when the part to be grabbed is photographed; P_1 denotes the pose of the model part in the captured point cloud relative to the model; P_2 denotes the pose of the part to be grabbed in the captured point cloud relative to the model; M_1 denotes the pose transformation matrix between P_1 and the model; M_2 denotes the pose transformation matrix between P_2 and the model; and R_1 denotes the grabbing position coordinates of the model part, where M_2 is obtained from the fine registration.
In one example, the coordinates O_1 and O_2 of the model part and of the part to be grabbed relative to the robot are first calculated:
O_1 = R_p1 × E_s × P_1
O_2 = R_p2 × E_n × P_2
where
P_1 = M_1 × Model;
in these formulas, R_p1 denotes the reference photographing point of the robot, and R_p2 denotes the photographing position of the robot for the part to be grabbed; E denotes the camera extrinsic parameters, used to convert a pose in the point cloud into a pose in the robot coordinate system; P_1 and P_2 respectively denote the poses of the parts in the captured point clouds relative to the model; M_1 denotes the pose transformation matrix between P_1 and the model, and M_2 denotes the pose transformation matrix between P_2 and the model.
Because the relative position of the part and the robot in the grabbing process is unchanged, the robot transformation matrix T is consistent with the part transformation matrix, and the following can be obtained:
T = O_2 × O_1^{-1}
The grabbing position coordinate R_2 at which the robot grabs the part to be grabbed is:
R_2 = T × R_1
That is,
R_2 = R_p2 × E_n × M_2 × M_1^{-1} × E_s^{-1} × R_p1^{-1} × R_1
where R_1 denotes the grabbing position coordinate at which the robot grabs the model part.
In an actual working environment, the robot obtains the coordinates of the grabbing positions through calculation, so that the robot reaches the accurate grabbing positions and accurately grabs the parts.
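Treating every quantity in this derivation as a 4x4 homogeneous transform, the grabbing coordinate follows directly from the formula above; the sketch below is a minimal NumPy rendition in which all inputs are assumed to already be 4x4 pose matrices.

    import numpy as np

    def grab_coordinate(R_p1, R_p2, E_s, E_n, M_1, M_2, R_1):
        # R_2 = R_p2 . E_n . M_2 . M_1^-1 . E_s^-1 . R_p1^-1 . R_1 (all 4x4 homogeneous matrices).
        # This is T . R_1 with T = O_2 . O_1^-1, O_1 = R_p1 . E_s . M_1 . Model and
        # O_2 = R_p2 . E_n . M_2 . Model; the Model pose cancels out.
        inv = np.linalg.inv
        T = R_p2 @ E_n @ M_2 @ inv(M_1) @ inv(E_s) @ inv(R_p1)
        return T @ R_1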
It should be understood that the robot 3D vision gripping method described above may be applied to a robot, wherein the robot is schematically illustrated in fig. 6-7.
In conclusion, the invention provides an algorithm for 3D grabbing by an industrial robot, which can realize that the robot accurately grabs parts and provides technical support for intelligent manufacturing of production lines in various industries.
Further, after the part to be grabbed is grabbed by adopting the robot 3D visual grabbing method, second actual point cloud data shot by a deviation correcting camera are obtained, wherein the second actual point cloud data comprise a first position relation between a robot gripper and the part to be grabbed;
preprocessing the second actual point cloud data to obtain second scene point cloud data;
matching the second scene point cloud data with a second preset model point cloud, wherein the second preset model point cloud comprises a second position relation between the robot gripper and the model part;
determining deviation information between the first position relation and the second position relation under the condition that the first position relation and the second position relation have deviation, wherein the deviation information comprises position movement information between second scene point cloud data and a second preset model point cloud;
and calculating the placing position coordinates of the part to be grabbed according to the deviation information and the placing position coordinates of the model part.
In this embodiment, the preprocessing mode is the same as the preprocessing mode in the robot 3D visual capture method, and details are not repeated here.
It should be understood that the positional relationship between the part to be grabbed and the robot gripper may differ from that between the model part and the robot gripper. For example, if the part to be grabbed sits slightly further forward in the gripper and the robot moves to the same position it used when placing the model part, the part to be grabbed will also end up further forward, producing a placement error. In this embodiment, the placing position coordinates of the part to be grabbed are therefore calculated from the deviation information and the placing position coordinates of the model part, and the coordinates used by the robot when placing the part to be grabbed are corrected according to these placing position coordinates, which ensures the working precision of the robot during grabbing and placing and improves the degree of intelligence of the robot.
That is, the deviation correction in this embodiment corrects the error: instead of being driven directly to the standard position, the robot is given a newly calculated placement position and moves to that calculated position.
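One plausible way to apply such a correction (the patent's own placement relation is only given as an image further below, so the exact frame handling here is an assumption) is to register the in-gripper point cloud against the second preset model point cloud, express the resulting deviation in the robot base frame, and shift the standard placing coordinate by it:

    import numpy as np
    import open3d as o3d

    def corrected_placement(scene2, model2, R_place_standard, base_T_cam):
        # scene2, model2: second scene / second preset model point clouds (open3d PointClouds).
        # R_place_standard: 4x4 standard placing coordinate of the model part.
        # base_T_cam: 4x4 extrinsics of the deviation-correcting camera in the robot base frame.
        # All names and the frame handling are illustrative assumptions, not the patent's formula.
        result = o3d.pipelines.registration.registration_icp(
            scene2, model2, 0.01, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        dev_cam = result.transformation                              # deviation in the camera frame
        dev_base = base_T_cam @ dev_cam @ np.linalg.inv(base_T_cam)  # same deviation in the base frame
        # Depending on the registration direction, the inverse of dev_base may be needed instead.
        return dev_base @ R_place_standard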
Optionally, the placing position coordinate R_2' of the part to be grabbed satisfies the following relation:
[relation given as an image in the original publication]
where R_1 and R_2 respectively denote the reference photographing point and the reference discharge point of the robot; R_1' and R_2' respectively denote the photographing position and the placing position after the robot grabs the part to be grabbed; O_1 denotes the part reference pose when the robot moves to the reference photographing point; O_2 denotes the part reference pose when the robot moves to the reference placement point; and O_1' denotes the new pose of the part to be grabbed at the photographing position.
It should be noted that the important nodes of the robot deviation correction procedure are shown in fig. 8. In fig. 8, Model is the pose of the model; P_1 and P_2 respectively denote the poses of the parts in the captured point clouds relative to the model, and M_1 and M_2 denote the corresponding transformation matrices, where M_2 is obtained from the fine registration; E denotes the camera extrinsic parameters, used to convert a pose in the point cloud into a pose in the robot coordinate system; R_1 and R_2 denote the reference photographing point and the reference discharge point of the robot; R_1' and R_2' respectively denote the photographing position and the placing position after the robot grabs the part to be grabbed; O_1 denotes the part reference pose when the robot moves to the reference photographing point, O_2 denotes the part reference pose when the robot moves to the reference placement point, and O_1' denotes the new pose of the part to be grabbed at the photographing position; T_1 to T_7 denote the conversion relations between the different poses.
From the position transformation from the Model to O_1', it can be obtained that:
T_4 × E_s × Model = E_n × M_2 × Model;
which gives:
T_4 = E_n × M_2 × E_s^{-1}
T_4^{-1} = E_s × M_2^{-1} × E_n^{-1}
from R 1 To O 1 ' position transformation can result in:
T 4 ×T 2 ×R 1 =O 1 '=T 3 ×T 1 ×R 1
i.e. T 3 =T 4 ×T 2 ×T 1 -1
From R 1 To O 2 The position transformation may result in:
T 5 ×T 2 ×R 1 =O 2 =T 6 ×T 3 ×R 1
T 5 =T 6 ×T 3 ×T 1 ×T 2 -1
will T 3 Substituting the formula into the formula to obtain:
T 5 =T 6 ×T 4
T 6 =T 5 ×T 4 -1
will T 4 -1 Substituting the formula to obtain:
Figure BDF0000017937350000081
the position conversion from R1 'to R2' can obtain:
Figure BDF0000017937350000082
that is to say that the first and second electrodes,
Figure BDF0000017937350000083
according to the formula, the placing position R of the part to be grabbed, which is grabbed by the robot, can be calculated according to the known quantity 2 ', through R 2 ' robot coordinates can be calculated in reverse.
In an actual working environment, the robot obtains the placing position coordinates through this calculation and compares them with its standard placing position coordinates to correct the deviation, so that it reaches the accurate placing position and places the part correctly.
It should be understood that the robot 3D vision grasping method described above may be applied to a robot, wherein the robot is schematically illustrated in fig. 9-10.
In conclusion, the robot can correctly place parts when accurately reaching the standard placement position, and technical support is provided for intelligent manufacturing of production lines in various industries.
The embodiment of the application further provides a robot 3D vision system, which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor executes the computer program to realize the steps of the robot 3D vision grasping method or the steps of the robot 3D vision deviation rectifying method.
The robot 3D vision system can realize each embodiment of the robot 3D vision grabbing method or each embodiment of the robot 3D vision deviation rectifying method, and can achieve the same beneficial effects, and the detailed description is omitted here.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the above teachings. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (5)

1. A robot 3D vision grabbing method is characterized by comprising the following steps:
acquiring first actual point cloud data of a part to be grabbed at a standard photographing position of a robot;
preprocessing the first actual point cloud data to obtain first scene point cloud data;
roughly matching the first scene point cloud data with a first preset model point cloud to obtain a plurality of candidate poses of the part to be grabbed;
performing fine matching on the candidate poses and poses in the first preset model point cloud to obtain a transformation matrix between the part to be grabbed and the model part;
calculating the coordinates of the positions of the parts to be grabbed according to the transformation matrix and the positions of the model parts, and grabbing the parts to be grabbed based on the coordinates of the positions to be grabbed;
after a part to be grabbed is grabbed, acquiring second actual point cloud data shot by a deviation correcting camera, wherein the second actual point cloud data comprises a first position relation between a robot gripper and the part to be grabbed;
preprocessing the second actual point cloud data to obtain second scene point cloud data;
matching the second scene point cloud data with a second preset model point cloud, wherein the second preset model point cloud comprises a second position relation between the robot gripper and the model part;
determining deviation information between the first position relation and the second position relation under the condition that the first position relation and the second position relation have deviation, wherein the deviation information comprises position movement information between second scene point cloud data and a second preset model point cloud;
calculating the placing position coordinates of the part to be grabbed according to the deviation information and the placing position coordinates of the model part;
the coordinate R_2 of the position to be grabbed satisfies the following relation:
R_2 = R_p2 × E_n × M_2 × M_1^{-1} × E_s^{-1} × R_p1^{-1} × R_1
where R_p1 denotes the reference photographing point of the robot, i.e. the photographing point at the standard photographing position of the robot; R_p2 denotes the photographing position for the part to be grabbed; E_s denotes the camera extrinsic parameters for the standard model; E_n denotes the camera extrinsic parameters when the part to be grabbed is photographed; P_1 denotes the pose of the model part in the captured point cloud relative to the model; P_2 denotes the pose of the part to be grabbed in the captured point cloud relative to the model; M_1 denotes the pose transformation matrix between P_1 and the model; M_2 denotes the pose transformation matrix between P_2 and the model; and R_1 denotes the grabbing position coordinates of the model part, where M_2 is obtained from the fine registration;
the preprocessing of the first actual point cloud data to obtain target point cloud data comprises:
filtering the first actual point cloud data, and performing down-sampling processing on the result of the filtering processing to obtain target point cloud data;
the rough matching of the first scene point cloud data with a first preset model point cloud to obtain a plurality of candidate poses of the part to be grabbed comprises:
calculating the global point pair characteristics of the first scene point cloud data, and establishing a hash table as a global model of the target point cloud data by taking the characteristics as keys and the point pairs as values;
and carrying out local matching on the global model of the first scene point cloud data and a first preset model point cloud to obtain a plurality of candidate poses.
2. The robot 3D vision grabbing method of claim 1, wherein the filtering of the first actual point cloud data comprises:
establishing a target point cloud space coordinate system, setting a channel based on the target point cloud space coordinate system, and removing points of the captured point cloud data outside the channel range to obtain a first filtering result;
calculating the average distance from each point of the first filtering result to all of its neighboring points, and removing from the first filtering result the points whose average distance lies outside the preset standard range, to obtain a second filtering result;
dividing the second filtering result into a plurality of regions, designating a seed point as the growth starting point for each region, comparing the points in the regions adjacent to each seed point with the seed point, calculating the similarity, merging points whose similarity exceeds a threshold into the same region, and iterating until no further points satisfying the condition can be merged into the region, to obtain a third filtering result;
and removing the points of the third filtering result that fall outside the preset three-dimensional box, to obtain a fourth filtering result.
3. The robot 3D vision grabbing method of claim 1, wherein the down-sampling comprises any one of uniform down-sampling, voxel down-sampling, curvature down-sampling, and Poisson disk sampling.
4. The robot 3D vision grabbing method of claim 1, wherein the placing position coordinate R_2' of the part to be grabbed after deviation correction satisfies the following relation:
[relation given as an image in the original publication]
where R_1 denotes the reference photographing point of the robot, R_2 denotes the reference discharge point of the robot, R_1' denotes the photographing position after the robot grabs the part to be grabbed, and R_2' denotes the placing position after the robot grabs the part to be grabbed.
5. A robotic 3D vision system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any of the preceding claims 1 to 4 when executing the computer program.
CN202111560940.2A 2021-12-20 2021-12-20 Robot 3D vision grabbing method and system Active CN113927606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111560940.2A CN113927606B (en) 2021-12-20 2021-12-20 Robot 3D vision grabbing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111560940.2A CN113927606B (en) 2021-12-20 2021-12-20 Robot 3D vision grabbing method and system

Publications (2)

Publication Number Publication Date
CN113927606A CN113927606A (en) 2022-01-14
CN113927606B true CN113927606B (en) 2022-10-14

Family

ID=79289255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111560940.2A Active CN113927606B (en) 2021-12-20 2021-12-20 Robot 3D vision grabbing method and system

Country Status (1)

Country Link
CN (1) CN113927606B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114939891B (en) * 2022-06-28 2024-03-19 上海仙工智能科技有限公司 3D grabbing method and system for composite robot based on object plane characteristics

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976353B (en) * 2016-04-14 2020-01-24 南京理工大学 Spatial non-cooperative target pose estimation method based on model and point cloud global matching
CN110160446A (en) * 2019-06-17 2019-08-23 珠海格力智能装备有限公司 Localization method, device, storage medium and the system of material assembly
CN110340891B (en) * 2019-07-11 2022-05-24 河海大学常州校区 Mechanical arm positioning and grabbing system and method based on point cloud template matching technology
CN112476434B (en) * 2020-11-24 2021-12-28 新拓三维技术(深圳)有限公司 Visual 3D pick-and-place method and system based on cooperative robot
CN112509063A (en) * 2020-12-21 2021-03-16 中国矿业大学 Mechanical arm grabbing system and method based on edge feature matching
CN113793383A (en) * 2021-08-24 2021-12-14 江西省智能产业技术创新研究院 3D visual identification taking and placing system and method

Also Published As

Publication number Publication date
CN113927606A (en) 2022-01-14

Similar Documents

Publication Publication Date Title
CN107886528B (en) Distribution line operation scene three-dimensional reconstruction method based on point cloud
CN109514133B (en) 3D curve welding seam autonomous teaching method of welding robot based on line structure light perception
CN111251295B (en) Visual mechanical arm grabbing method and device applied to parameterized parts
CN110223345B (en) Point cloud-based distribution line operation object pose estimation method
CN112070818A (en) Robot disordered grabbing method and system based on machine vision and storage medium
CN112669385B (en) Industrial robot part identification and pose estimation method based on three-dimensional point cloud features
JP5480667B2 (en) Position / orientation measuring apparatus, position / orientation measuring method, program
CN111897349A (en) Underwater robot autonomous obstacle avoidance method based on binocular vision
CN111243017A (en) Intelligent robot grabbing method based on 3D vision
CN110065068B (en) Robot assembly operation demonstration programming method and device based on reverse engineering
CN112509063A (en) Mechanical arm grabbing system and method based on edge feature matching
CN113284179B (en) Robot multi-object sorting method based on deep learning
CN113781561B (en) Target pose estimation method based on self-adaptive Gaussian weight quick point feature histogram
CN114355953B (en) High-precision control method and system of multi-axis servo system based on machine vision
CN112883984B (en) Mechanical arm grabbing system and method based on feature matching
CN110909644A (en) Method and system for adjusting grabbing posture of mechanical arm end effector based on reinforcement learning
CN115213896A (en) Object grabbing method, system and equipment based on mechanical arm and storage medium
CN113034600A (en) Non-texture planar structure industrial part identification and 6D pose estimation method based on template matching
CN111360821A (en) Picking control method, device and equipment and computer scale storage medium
CN113927606B (en) Robot 3D vision grabbing method and system
CN112109072B (en) Accurate 6D pose measurement and grabbing method for large sparse feature tray
CN114742883A (en) Automatic assembly method and system based on plane type workpiece positioning algorithm
CN113963129A (en) Point cloud-based ship small component template matching and online identification method
CN112338922B (en) Five-axis mechanical arm grabbing and placing method and related device
CN113822946B (en) Mechanical arm grabbing method based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant