CN114299039B - Robot and collision detection device and method thereof - Google Patents

Robot and collision detection device and method thereof

Info

Publication number
CN114299039B
CN114299039B (application CN202111655419.7A)
Authority
CN
China
Prior art keywords
robot
point cloud
three-dimensional point cloud model
collision detection
Prior art date
Legal status
Active
Application number
CN202111655419.7A
Other languages
Chinese (zh)
Other versions
CN114299039A (en)
Inventor
林义忠
谢震鹏
易雨晴
杜柳明
Current Assignee
Guangxi University
Original Assignee
Guangxi University
Priority date
Filing date
Publication date
Application filed by Guangxi University
Priority to CN202111655419.7A
Publication of CN114299039A
Application granted
Publication of CN114299039B


Landscapes

  • Manipulator (AREA)

Abstract

The invention discloses a robot and a collision detection device and method thereof, belonging to the technical field of robot collision detection. The collision detection device includes: an information acquisition module for acquiring three-dimensional point cloud information of the objects and people in the real scene around the robot; a trajectory planning module for planning the motion trajectory of the robot end effector; a position acquisition module for acquiring the pose of each joint of the robot; a three-dimensional mapping module for constructing a three-dimensional point cloud map of the robot and of the objects and people in the surrounding real scene; and a collision detection module for detecting, in the updated three-dimensional point cloud map, whether the robot end effector collides with objects and people in the surrounding real scene and transmitting the detection result. The invention reduces the error and time delay caused by the robot's continuous motion during collision detection, and it also reduces the data volume used for collision detection, improving detection speed.

Description

Robot and collision detection device and method thereof
Technical Field
The invention relates to the technical field of robot collision detection, in particular to a robot and a collision detection device and method thereof.
Background
At present, robots frequently need to cooperate with users to complete tasks, so collaborative robots are in widespread use. While executing a task, a robot must be able to detect whether it collides with the people or objects around it and adopt an appropriate strategy to eliminate the collision, both to guarantee human safety and to prevent objects from being damaged.
Existing robot collision detection mostly relies on torque sensors, proximity sensors and the like, which are costly. Vision-based detection methods suffer from a large computational load and poor real-time performance, and existing methods that perform collision detection by constructing a map are likewise either computationally heavy or too simple to achieve sufficient accuracy.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a robot and a collision detection apparatus and method thereof. A Kinect camera mounted on the forearm near the robot end effector acquires three-dimensional point cloud information of the objects and people around the robot; a corresponding three-dimensional point cloud model of the robot is established from its D-H parameters; and a three-dimensional point cloud map is built with both in the same coordinate system. A position acquisition method is provided for obtaining the robot's pose, and a collision detection method that reduces the data volume is provided, thereby improving detection speed.
To achieve this objective, the invention adopts the following technical scheme:
a collision detecting apparatus of a robot, comprising:
the information acquisition module is used for acquiring three-dimensional point cloud information of objects and people in a real scene around the robot;
the trajectory planning module is used for planning the motion trajectory of the robot end effector;
the position acquisition module is used for solving the joint variables of the robot at the next moment according to the motion trajectory of the robot end effector, and for obtaining the pose of each link of the robot at the next moment by applying the homogeneous transformation matrix to those joint variables;
the three-dimensional mapping module is used for establishing a first three-dimensional point cloud model and a second three-dimensional point cloud model, wherein the first three-dimensional point cloud model is a three-dimensional point cloud model of the robot and the second is established from the three-dimensional point cloud information of the objects and people in the real scene around the robot; the two models are then fused to construct a three-dimensional point cloud map of the robot and of the surrounding objects and people; the first model is updated according to the pose of each link of the robot at different moments, and the second model is updated from the three-dimensional point cloud information acquired by the information acquisition module, so as to obtain an updated three-dimensional point cloud map; and
the collision detection module is used for detecting, in the updated three-dimensional point cloud map, whether the robot end effector collides with objects and people in the surrounding real scene, and for transmitting the detection result.
Further, the collision detection device of the robot also comprises an obstacle avoidance module, which is used for receiving the transmitted detection result and formulating the robot's response mode after collision detection according to it.
Further, the information acquisition module comprises a depth camera and an image processing module; the depth camera is mounted on the forearm near the robot end effector and is used for collecting images of the objects and people in the real scene around the end effector during robot motion; the image processing module is installed on the robot and is used for performing image preprocessing on the images collected by the depth camera, then classifying the objects and people in the images and marking the pixel points belonging to different categories.
Further, the position acquisition module is also configured, at the next moment, to compare the encoder information of each robot joint with the solved joint variables for that moment to obtain a deviation value; the deviation value is combined with the solved joint variables for the following moment to obtain the corrected joint variables, and the homogeneous transformation matrix is applied to the corrected joint variables to obtain the pose of each link of the robot at that moment.
Further, the first three-dimensional point cloud model is a three-dimensional point cloud model established for the robot by the D-H method, and the second three-dimensional point cloud model is obtained by converting the three-dimensional point cloud information of the objects and people in the real scene around the robot produced by the image processing module; fusing the first and second three-dimensional point cloud models means placing them in the same coordinate system and merging them into a three-dimensional point cloud map containing the robot and the objects and people in the real scene around it.
Furthermore, the three-dimensional mapping module further comprises a spherical bounding box established for the first three-dimensional point cloud model and a spherical bounding box established for the second three-dimensional point cloud model.
The invention also provides a collision detection method for the robot (i.e., a method of detecting collisions using the above collision detection device), which comprises the following steps:
(1) information acquisition: collecting three-dimensional point cloud information of objects and people in a real scene around the robot;
(2) Trajectory planning: planning the motion trajectory of the robot end effector;
(3) Position acquisition: solving the joint variables of the robot at the next moment according to the motion trajectory of the robot end effector; the pose of each link of the robot at the next moment is obtained by applying the homogeneous transformation matrix to those joint variables;
(4) Three-dimensional mapping: establishing a three-dimensional point cloud model of the robot, namely a first three-dimensional point cloud model, and establishing a three-dimensional point cloud model from the three-dimensional point cloud information of the objects and people in the real scene around the robot, namely a second three-dimensional point cloud model, then fusing the two models to construct a three-dimensional point cloud map of the robot and of the surrounding objects and people; the first three-dimensional point cloud model is updated according to the pose of each link of the robot at different moments, and the second three-dimensional point cloud model is updated according to changes in the three-dimensional point cloud information acquired by the information acquisition module, so as to obtain an updated three-dimensional point cloud map;
(5) collision detection: detecting whether the robot end effector collides with objects or people in the surrounding real scene in the updated three-dimensional point cloud map, and transmitting a detection result;
(6) Obstacle avoidance: formulating the robot's response mode after collision detection according to the transmitted detection result.
Further, step (4) also includes establishing, on the three-dimensional point cloud map, a spherical bounding box for the three-dimensional point cloud model of the robot end effector and a spherical bounding box for the three-dimensional point cloud model of each object or person in the real scene around the robot.
Further, the specific steps in step (5) of detecting whether the robot end effector collides with an object or a person in the surrounding real scene are as follows:
51) identify the object currently around the robot end effector from the depth camera images, and confirm object i as the collision detection target;
52) let the sphere center of the spherical bounding box of object i be (x_oi, y_oi, z_oi) with radius R_i, and the sphere center of the spherical bounding box of the robot end effector be (x_oj, y_oj, z_oj) with radius R_j; denote the point cloud coordinates of object i as (x_i, y_i, z_i) and the point cloud coordinates of the robot end effector as (x_j, y_j, z_j); perform collision detection on the spherical bounding boxes according to the center distance D of the two spheres, judging whether
D = √((x_oi − x_oj)² + (y_oi − y_oj)² + (z_oi − z_oj)²) ≤ R_i + R_j
holds; if it does not hold, no collision occurs and the procedure goes to 58); if it holds, a collision may occur, and detection continues with 53);
53) take a cuboid bounding-box region containing the intersection of the two spherical bounding boxes, centered at the center point of the intersection; record it as the detection region, and perform collision detection on the points of object i and of the robot end effector within it;
54) perform collision detection on the points in the detection region by a slicing method: starting from the center of the cuboid detection region, select tangent planes upward and downward for detection, where each tangent plane is a plane on which the z coordinates of the points are equal. For the points on a tangent plane, first establish two rectangular bounding boxes parallel to the X and Y axes, enclosing the points of object i and the points of the robot end effector respectively, with characteristic corner coordinates (x_i,min, y_i,min), (x_i,max, y_i,max) and (x_j,min, y_j,min), (x_j,max, y_j,max); then judge whether the two rectangular bounding boxes intersect, i.e. whether y_i,min > y_j,max or y_j,min > y_i,max holds: if either holds, the points on this tangent plane do not collide and detection moves to the next tangent plane; if neither holds, further test whether x_i,min > x_j,max or x_j,min > x_i,max holds: if either holds, the points on this tangent plane do not collide and detection moves to the next tangent plane; if neither holds, continue with 55);
55) take the points of the three-dimensional point cloud model of object i and the points of the three-dimensional point cloud model of the robot end effector lying in the intersection of the two rectangular bounding boxes for collision detection; for points with y_i = y_j, compare the x coordinates: if x_oj > x_oi, go to step 56); if x_oj < x_oi, go to step 57);
56) if x_j > x_i, the point does not collide and detection proceeds to the next point; if x_j ≤ x_i, a collision occurs and the collision detection result is sent to the obstacle avoidance module;
57) if x_i > x_j, the point does not collide and detection proceeds to the next point; if x_i ≤ x_j, a collision occurs and the collision detection result is sent to the obstacle avoidance module;
58) after collision detection is finished, feed back the detection result.
The invention also provides a robot, which comprises the collision detection device of the robot.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
according to the invention, through the arrangement of the three-dimensional mapping module, the position acquisition module, the obstacle avoidance module and the like, the position acquisition method of each joint position of the robot and the specific collision detection method of the collision detection module are provided, so that the error and time delay caused by the continuous motion of the robot during collision detection can be reduced, meanwhile, the data volume for collision detection is reduced, and the detection speed is improved. In addition, the invention does not need to detect collision by means of a torque sensor, a proximity sensor and the like, thereby reducing the cost.
Drawings
Fig. 1 is a block diagram schematically illustrating a collision detecting apparatus of a robot according to an embodiment of the present invention;
fig. 2 is a flowchart of a collision detection method for a robot according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a method of detecting a collision of a robot according to an embodiment of the present invention;
In the figures: 1 - information acquisition module; 11 - depth camera; 12 - image processing module; 2 - trajectory planning module; 3 - position acquisition module; 4 - three-dimensional mapping module; 41 - first three-dimensional point cloud model; 42 - second three-dimensional point cloud model; 5 - collision detection module; 6 - obstacle avoidance module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present. When a component is referred to as being "disposed on" another component, it can be directly on the other component or intervening components may also be present. The terms "vertical," "horizontal," "left," "right," and the like are used herein for purposes of illustration only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Examples
Referring to fig. 1, a collision detecting apparatus for a robot includes:
the information acquisition module 1 is used for acquiring three-dimensional point cloud information of objects and people in a real scene around the robot;
the track planning module 2 is used for planning the motion track of the robot end effector;
the position acquisition module 3 is used for solving the joint variables of the robot at the next moment according to the motion trajectory of the robot end effector, and for obtaining the pose of each link of the robot at the next moment by applying the homogeneous transformation matrix to those joint variables;
the three-dimensional mapping module 4 is configured to establish a first three-dimensional point cloud model 41 and a second three-dimensional point cloud model 42, where the first three-dimensional point cloud model 41 is a three-dimensional point cloud model of a robot, and the second three-dimensional point cloud model 42 is a three-dimensional point cloud model established according to three-dimensional point cloud information of objects and people in a real scene around the robot, and the establishing methods of the first three-dimensional point cloud model 41 and the second three-dimensional point cloud model 42 are both the prior art and are not repeated here. Then fusing the first three-dimensional point cloud model 41 and the second three-dimensional point cloud model 42 to construct a robot, a real scene around the robot and a three-dimensional point cloud map of the robot; updating the first three-dimensional point cloud model 41 according to the pose of each connecting rod of the robot at different moments, and updating the second three-dimensional point cloud model 42 according to the three-dimensional point cloud information conversion acquired by the information acquisition module, so as to obtain an updated three-dimensional point cloud map;
the collision detection module 5 is used for detecting whether the robot end effector collides with objects and people in the surrounding real scene in the updated three-dimensional point cloud map and transmitting a detection result; and
and the obstacle avoidance module 6 is used for receiving the transmitted detection result and formulating the robot's response mode after collision detection according to it.
As a further preferred technical solution, the information acquisition module 1 includes a depth camera 11 and an image processing module 12. The depth camera 11 is mounted on the robot forearm near the end effector and is used to acquire images of the objects and people in the real scene around the end effector during robot motion. In this embodiment, the depth camera 11 is a Kinect camera. The image processing module 12 is installed on the robot and is configured to perform image preprocessing, such as denoising, on the images collected by the depth camera 11, then classify the objects and people in the images and mark the pixel points belonging to different categories (which may be marked with numbers and characters).
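As a rough illustration of this stage only, the following sketch denoises a frame with OpenCV and produces a per-pixel label map; the patent does not specify the classifier, so a simple depth threshold (an assumption) stands in for the real object/person classification:

import cv2
import numpy as np

def preprocess_and_label(bgr, depth_m):
    # Denoise the raw camera frame (one possible pre-processing choice).
    denoised = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
    # Stand-in classification: 0 = background, 1 = nearby object or person.
    # The 1.5 m threshold is purely illustrative, not from the patent.
    labels = np.zeros(depth_m.shape, dtype=np.uint8)
    labels[depth_m < 1.5] = 1
    return denoised, labels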
As a further preferable technical solution, the first three-dimensional point cloud model 41 is a three-dimensional point cloud model established for the robot by the D-H method, and the second three-dimensional point cloud model 42 is obtained by converting the three-dimensional point cloud information of the objects and people in the real scene around the robot produced by the image processing module; fusing the first three-dimensional point cloud model 41 and the second three-dimensional point cloud model 42 specifically means placing them in the same coordinate system and merging them into a three-dimensional point cloud map containing the robot and the objects and people in the real scene around it.
As a further preferred technical solution, the three-dimensional mapping module 4 further includes a spherical bounding box established for the first three-dimensional point cloud model 41 and a spherical bounding box established for the second three-dimensional point cloud model 42; the first three-dimensional point cloud model 41 is updated according to the link poses obtained from the robot's planned motion trajectory, and whether the second three-dimensional point cloud model 42 is updated is decided according to the images obtained by the depth camera 11; the two spherical bounding boxes are updated whenever the first and second three-dimensional point cloud models are updated, respectively.
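One cheap way to realize such a spherical bounding box is the centroid plus the farthest-point radius; the sketch below (NumPy) is one such construction, not necessarily the patent's exact one:

import numpy as np

def spherical_bounding_box(points):
    # points: (N, 3) array; returns (center, radius).
    # Not the minimal enclosing sphere, but cheap to recompute whenever
    # the underlying point cloud model is updated.
    center = points.mean(axis=0)
    radius = float(np.linalg.norm(points - center, axis=1).max())
    return center, radius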
The invention also provides a collision detection method of the robot, as shown in fig. 2, comprising the following steps:
(1) information acquisition: collecting three-dimensional point cloud information of objects and people (such as operators) in a real scene around the robot; the method comprises the following specific steps:
the method comprises the steps of collecting images of objects and people in a real scene around the robot end effector in the robot motion process by adopting a kinect camera arranged on a small arm close to the robot end effector, carrying out image preprocessing such as denoising on the collected images, classifying the objects, people and the like in the images, and marking pixel points belonging to different categories (which can be marked by numbers and characters).
(2) Trajectory planning: planning the motion trajectory of the robot end effector using an existing method;
(3) Position acquisition: according to the motion trajectory of the robot end effector, solve the joint variables of the robot at the next moment using the robot's inverse kinematics; the pose of each link of the robot at the next moment is then obtained by applying the homogeneous transformation matrix to those joint variables. Specifically:
At time T0, when work starts, the joint variables of the robot at time T1 are solved from the motion trajectory of the end effector using the homogeneous transformation matrix obtained by the D-H method and the robot's inverse kinematics. Applying the homogeneous transformation matrix obtained by the D-H method to the T1 joint variables yields the pose of each link of the robot at time T1, so that the three-dimensional mapping module can use the T1 link poses to update the robot's three-dimensional point cloud model and thereby obtain an updated three-dimensional point cloud map.
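For concreteness, a standard D-H forward-kinematics computation of the link poses might look like the sketch below; the parameter names and the standard (distal) D-H convention are assumptions, since the patent only names the method:

import numpy as np

def dh_transform(theta, d, a, alpha):
    # Homogeneous transform from link frame i-1 to frame i (standard D-H).
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def link_poses(thetas, d, a, alpha):
    # Base-frame pose of every link for one set of joint variables.
    T = np.eye(4)
    poses = []
    for th, di, ai, al in zip(thetas, d, a, alpha):
        T = T @ dh_transform(th, di, ai, al)
        poses.append(T.copy())
    return poses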
(4) Three-dimensional mapping: establish a three-dimensional point cloud model of the robot, namely the first three-dimensional point cloud model 41, and establish a three-dimensional point cloud model from the three-dimensional point cloud information of the objects and people in the real scene around the robot, namely the second three-dimensional point cloud model 42; fuse the two models to construct a three-dimensional point cloud map of the robot and of the surrounding objects and people. The first three-dimensional point cloud model 41 is updated according to the pose of each link of the robot at different moments, and the second three-dimensional point cloud model 42 is updated according to changes in the three-dimensional point cloud information acquired by the information acquisition module 1, so as to obtain an updated three-dimensional point cloud map; a spherical bounding box of the first three-dimensional point cloud model 41 and a spherical bounding box of the second three-dimensional point cloud model 42 are also established on the three-dimensional point cloud map.
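Fusing the two models amounts to expressing both clouds in one coordinate system and concatenating them; a minimal sketch, assuming a known camera-to-base transform T_base_cam (the transform name is an assumption):

import numpy as np

def to_base_frame(points_cam, T_base_cam):
    # Transform an (N, 3) camera-frame cloud into the robot base frame.
    homogeneous = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homogeneous @ T_base_cam.T)[:, :3]

def fuse_map(robot_cloud, scene_cloud_cam, T_base_cam):
    # Concatenate the robot model and the scene cloud in the base frame.
    return np.vstack([robot_cloud, to_base_frame(scene_cloud_cam, T_base_cam)])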
(5) Collision detection: detect, in the updated three-dimensional point cloud map, whether the robot end effector collides with objects or people in the surrounding real scene, and transmit the detection result. As shown in fig. 3, the specific detection steps are as follows. In this embodiment, to improve detection efficiency, only whether the object i near the robot end effector collides with the end effector is detected:
51) identify the object currently around the robot end effector from the depth camera images, and confirm object i as the collision detection target;
52) let the sphere center of the spherical bounding box of object i be (x_oi, y_oi, z_oi) with radius R_i, and the sphere center of the spherical bounding box of the robot end effector be (x_oj, y_oj, z_oj) with radius R_j; denote the point cloud coordinates of object i as (x_i, y_i, z_i) and the point cloud coordinates of the robot end effector as (x_j, y_j, z_j); judge whether a collision occurs according to the center distance D of the two spherical bounding boxes, i.e. whether
D = √((x_oi − x_oj)² + (y_oi − y_oj)² + (z_oi − z_oj)²) ≤ R_i + R_j
holds; if it does not hold, no collision occurs and the procedure goes to 58); if it holds, a collision may occur, and detection continues with 53);
53) take a cuboid bounding-box region containing the intersection of the two spherical bounding boxes, centered at the center point of the intersection; record it as the detection region, and perform collision detection on the points of object i and of the robot end effector within it;
54) perform collision detection on the points in the detection region by a slicing method: starting from the center of the cuboid detection region, select tangent planes upward and downward for detection, where each tangent plane is a plane on which the z coordinates of the points are equal. For the points on a tangent plane, first establish two rectangular bounding boxes parallel to the X and Y axes, enclosing the points of object i and the points of the robot end effector respectively, with characteristic corner coordinates (x_i,min, y_i,min), (x_i,max, y_i,max) and (x_j,min, y_j,min), (x_j,max, y_j,max); then judge whether the two rectangular bounding boxes intersect, i.e. whether y_i,min > y_j,max or y_j,min > y_i,max holds: if either holds, the points on this tangent plane do not collide and detection moves to the next tangent plane; if neither holds, further test whether x_i,min > x_j,max or x_j,min > x_i,max holds: if either holds, the points on this tangent plane do not collide and detection moves to the next tangent plane; if neither holds, continue with 55);
55) take the points of the three-dimensional point cloud model of object i and the points of the three-dimensional point cloud model of the robot end effector lying in the intersection of the two rectangular bounding boxes for collision detection; for points with y_i = y_j, compare the x coordinates: if x_oj > x_oi, go to step 56); if x_oj < x_oi, go to step 57);
56) if x_j > x_i, the point does not collide and detection proceeds to the next point; if x_j ≤ x_i, a collision occurs and the collision detection result is sent to the obstacle avoidance module;
57) if x_i > x_j, the point does not collide and detection proceeds to the next point; if x_i ≤ x_j, a collision occurs and the collision detection result is sent to the obstacle avoidance module;
58) after collision detection is finished, feed back the detection result.
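The procedure of steps 51)-58) can be condensed into code. The sketch below (NumPy) follows the same broad-to-narrow structure under stated simplifications: slices are visited bottom-to-top rather than outward from the region center, the detection region is taken as the z-overlap of the two clouds, and the slice thickness dz and the y-matching tolerance are assumed values:

import numpy as np

def spheres_overlap(c_i, r_i, c_j, r_j):
    # Step 52: broad-phase test on the two spherical bounding boxes.
    return float(np.linalg.norm(np.asarray(c_i) - np.asarray(c_j))) <= r_i + r_j

def slice_collides(obj_xy, eff_xy, x_oi, x_oj, y_tol=1e-3):
    # Steps 54-57 for one tangent plane (fixed z).
    xi0, yi0 = obj_xy.min(axis=0); xi1, yi1 = obj_xy.max(axis=0)
    xj0, yj0 = eff_xy.min(axis=0); xj1, yj1 = eff_xy.max(axis=0)
    if yi0 > yj1 or yj0 > yi1 or xi0 > xj1 or xj0 > xi1:
        return False  # rectangles disjoint: no collision on this slice
    for x_i, y_i in obj_xy:
        for x_j, y_j in eff_xy:
            if abs(y_i - y_j) > y_tol:
                continue  # only compare points with (approximately) equal y
            if x_oj > x_oi and x_j <= x_i:
                return True  # step 56
            if x_oj < x_oi and x_i <= x_j:
                return True  # step 57
    return False

def detect_collision(obj_pts, eff_pts, c_i, r_i, c_j, r_j, dz=0.01):
    # Steps 51-58 end to end for one object/effector pair.
    if not spheres_overlap(c_i, r_i, c_j, r_j):
        return False
    z_lo = max(obj_pts[:, 2].min(), eff_pts[:, 2].min())
    z_hi = min(obj_pts[:, 2].max(), eff_pts[:, 2].max())
    for z in np.arange(z_lo, z_hi + dz, dz):
        o = obj_pts[np.abs(obj_pts[:, 2] - z) < dz / 2, :2]
        e = eff_pts[np.abs(eff_pts[:, 2] - z) < dz / 2, :2]
        if len(o) and len(e) and slice_collides(o, e, c_i[0], c_j[0]):
            return True
    return False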
(6) Obstacle avoidance: formulate the robot's response mode after collision detection according to the transmitted detection result. The response modes include: when there is no collision, the robot continues its motion; when a collision is detected and the robot's speed is low, the robot decelerates and re-plans its trajectory to avoid the collision; when a collision is detected and the speed is high, the robot quickly moves away from the collision target.
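A minimal sketch of this response policy, where the boundary between a "low" and a "high" speed is an illustrative assumption rather than a value given in the patent:

def collision_response(collided, speed_mps, slow_limit_mps=0.25):
    # Response modes of step (6).
    if not collided:
        return "continue_motion"
    if speed_mps <= slow_limit_mps:
        return "decelerate_and_replan"  # low speed: re-plan to avoid collision
    return "retreat_from_collision"     # high speed: move away from the target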
The invention further provides a robot corresponding to the above embodiments. The robot provided by the invention includes the collision detection apparatus proposed in the above embodiments; for its specific implementation, reference may be made to the above embodiments, which are not repeated here to avoid redundancy. It should be noted that the robot according to the invention is a fixed industrial robot, i.e., the base of the robot cannot move and only its joints are movable.
In addition, in other embodiments, the position acquisition module 3 may be further configured, at the next moment, to compare the encoder information of each robot joint with the solved joint variables for that moment to obtain a deviation value; the deviation value is combined with the solved joint variables for the following moment (obtained in the same way, from the motion trajectory of the robot end effector, the homogeneous transformation matrix given by the D-H method, and the robot's inverse kinematics) to obtain the corrected joint variables, and the homogeneous transformation matrix is applied to the corrected joint variables to obtain the pose of each link of the robot at that moment. The position acquisition method is specifically as follows:
At time T0, when work starts, the joint variables of the robot at time T1 are solved from the motion trajectory of the end effector using the homogeneous transformation matrix obtained by the D-H method and the robot's inverse kinematics; applying the homogeneous transformation matrix to the solved T1 joint variables yields the pose of each link at time T1, the three-dimensional mapping module updates the robot's three-dimensional point cloud model with these link poses to obtain an updated three-dimensional point cloud map, and collision detection is performed, completing position acquisition and collision detection for time T0. At time T1, the joint information θ1 is read from the robot's encoders, and the encoder readings θ1 are compared with the solved T1 joint variables to obtain the deviation value Δ1. Combining Δ1 with the solved joint variables for time T2 gives the corrected T2 joint variables; applying the homogeneous transformation matrix to the corrected T2 joint variables yields the pose of each link of the robot at time T2, so that the three-dimensional mapping module again updates the robot's three-dimensional point cloud model with the T2 link poses to obtain an updated three-dimensional point cloud map, and collision detection is performed, completing position acquisition and collision detection for time T1. Position acquisition and collision detection at subsequent moments proceed by analogy.
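The correction step reduces to simple vector arithmetic on the joint variables; a sketch (NumPy), where the sign convention of the deviation is an assumption:

import numpy as np

def corrected_joint_variables(theta_solved_t1, theta_encoder_t1, theta_solved_t2):
    # delta_1 = encoder reading at T1 minus the solved T1 joint variables;
    # the corrected T2 variables fold this deviation into the solved ones.
    delta_1 = np.asarray(theta_encoder_t1) - np.asarray(theta_solved_t1)
    return np.asarray(theta_solved_t2) + delta_1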
The above description covers the preferred embodiments of the present invention, but the invention is not limited thereto; any changes and modifications made within the spirit of the present invention shall fall within its scope of protection.

Claims (7)

1. A collision detecting apparatus of a robot, characterized by comprising:
the information acquisition module is used for acquiring three-dimensional point cloud information of objects and people in a real scene around the robot;
the trajectory planning module is used for planning the motion trajectory of the robot end effector;
the position acquisition module is used for solving the joint variables of the robot at the next moment according to the motion trajectory of the robot end effector, and for obtaining the pose of each link of the robot at the next moment by applying the homogeneous transformation matrix to those joint variables;
the three-dimensional mapping module is used for establishing a first three-dimensional point cloud model and a second three-dimensional point cloud model, wherein the first three-dimensional point cloud model is a three-dimensional point cloud model of the robot and the second three-dimensional point cloud model is established from the three-dimensional point cloud information of the objects and people in the real scene around the robot; the first three-dimensional point cloud model and the second three-dimensional point cloud model are then fused to construct a three-dimensional point cloud map of the robot and of the objects and people in the surrounding real scene; the first three-dimensional point cloud model is updated according to the pose of each link of the robot at different moments, and the second three-dimensional point cloud model is updated from the three-dimensional point cloud information acquired by the information acquisition module, so as to obtain an updated three-dimensional point cloud map; and
the collision detection module is used for detecting whether the robot end effector collides with objects and people in the surrounding real scene in the updated three-dimensional point cloud map and transmitting a detection result;
further comprising a collision detection method using the collision detection apparatus, the collision detection method comprising the following steps:
(1) information acquisition: collecting three-dimensional point cloud information of objects and people in a real scene around the robot;
(2) planning a track: planning a motion track of a robot end effector;
(3) Position acquisition: solving the joint variables of the robot at the next moment according to the motion trajectory of the robot end effector; the pose of each link of the robot at the next moment is obtained by applying the homogeneous transformation matrix to those joint variables;
(4) Three-dimensional mapping: establishing a three-dimensional point cloud model of the robot, namely a first three-dimensional point cloud model, and establishing a three-dimensional point cloud model from the three-dimensional point cloud information of the objects and people in the real scene around the robot, namely a second three-dimensional point cloud model, then fusing the two models to construct a three-dimensional point cloud map of the robot and of the surrounding objects and people; the first three-dimensional point cloud model is updated according to the pose of each link of the robot at different moments, and the second three-dimensional point cloud model is updated according to changes in the three-dimensional point cloud information acquired by the information acquisition module, so as to obtain an updated three-dimensional point cloud map;
step (4) further comprises establishing, on the three-dimensional point cloud map, a spherical bounding box for the three-dimensional point cloud model of the robot end effector and a spherical bounding box for the three-dimensional point cloud model of each object or person in the real scene around the robot;
(5) collision detection: detecting whether the robot end effector collides with objects or people in the surrounding real scene in the updated three-dimensional point cloud map, and transmitting a detection result;
the specific steps in step (5) of detecting whether the robot end effector collides with an object or a person in the surrounding real scene are as follows:
51) identifying the object currently around the robot end effector from the depth camera images, and confirming object i as the collision detection target;
52) letting the sphere center of the spherical bounding box of object i be (x_oi, y_oi, z_oi) with radius R_i, and the sphere center of the spherical bounding box of the robot end effector be (x_oj, y_oj, z_oj) with radius R_j; denoting the point cloud coordinates of object i as (x_i, y_i, z_i) and the point cloud coordinates of the robot end effector as (x_j, y_j, z_j); judging whether a collision occurs according to the center distance D of the two spherical bounding boxes, i.e. whether
D = √((x_oi − x_oj)² + (y_oi − y_oj)² + (z_oi − z_oj)²) ≤ R_i + R_j
holds; if it does not hold, no collision occurs and the procedure goes to 58); if it holds, a collision may occur, and detection continues with 53);
53) taking a cuboid bounding-box region containing the intersection of the two spherical bounding boxes, centered at the center point of the intersection; recording it as the detection region, and performing collision detection on the points of object i and of the robot end effector within it;
54) performing collision detection on the points in the detection region by a slicing method: starting from the center of the cuboid detection region, tangent planes are selected upward and downward for detection, where each tangent plane is a plane on which the z coordinates of the points are equal; for the points on a tangent plane, two rectangular bounding boxes parallel to the X and Y axes are first established, enclosing the points of object i and the points of the robot end effector respectively, with characteristic corner coordinates (x_i,min, y_i,min), (x_i,max, y_i,max) and (x_j,min, y_j,min), (x_j,max, y_j,max); it is then judged whether the two rectangular bounding boxes intersect, i.e. whether y_i,min > y_j,max or y_j,min > y_i,max holds: if either holds, the points on this tangent plane do not collide and detection moves to the next tangent plane; if neither holds, it is further tested whether x_i,min > x_j,max or x_j,min > x_i,max holds: if either holds, the points on this tangent plane do not collide and detection moves to the next tangent plane; if neither holds, continue with 55);
55) taking the points of the three-dimensional point cloud model of object i and the points of the three-dimensional point cloud model of the robot end effector lying in the intersection of the two rectangular bounding boxes for collision detection; for points with y_i = y_j, the x coordinates are compared: if x_oj > x_oi, go to step 56); if x_oj < x_oi, go to step 57);
56) if x_j > x_i, the point does not collide and detection proceeds to the next point; if x_j ≤ x_i, a collision occurs and the collision detection result is sent to the obstacle avoidance module;
57) if x_i > x_j, the point does not collide and detection proceeds to the next point; if x_i ≤ x_j, a collision occurs and the collision detection result is sent to the obstacle avoidance module;
58) after collision detection is finished, feeding back the detection result;
(6) Collision response: formulating the robot's response mode after collision detection according to the transmitted detection result.
2. The collision detection device according to claim 1, further comprising an obstacle avoidance module, wherein the obstacle avoidance module is configured to receive the transmitted detection result and formulate a response mode of the robot after collision detection according to the detection result.
3. The collision detection device according to claim 1, wherein the information acquisition module comprises a depth camera and an image processing module; the depth camera is mounted on the forearm near the robot end effector and is used for collecting images of the objects and people in the real scene around the end effector during robot motion; the image processing module is installed on the robot and is used for performing image preprocessing on the images collected by the depth camera, then classifying the objects and people in the images and marking the pixel points belonging to different categories.
4. The collision detection device according to claim 1, wherein the position acquisition module is further configured, at the next moment, to compare the encoder information of each robot joint with the solved joint variables for that moment to obtain a deviation value, to combine the deviation value with the solved joint variables for the following moment to obtain the corrected joint variables, and to obtain the pose of each link of the robot at that moment by applying the homogeneous transformation matrix to the corrected joint variables.
5. The collision detection apparatus according to claim 3, wherein the first three-dimensional point cloud model is a three-dimensional point cloud model created for the robot by the D-H method, and the second three-dimensional point cloud model is obtained by converting the three-dimensional point cloud information of the objects and people in the real scene around the robot produced by the image processing module; fusing the first three-dimensional point cloud model and the second three-dimensional point cloud model specifically means placing them in the same coordinate system and merging them into a three-dimensional point cloud map containing the robot and the objects and people in the real scene around it.
6. The collision detection apparatus according to claim 4, wherein the three-dimensional mapping module further comprises a spherical bounding box established for the first three-dimensional point cloud model and a spherical bounding box established for the second three-dimensional point cloud model.
7. A robot characterized by comprising a collision detecting device of a robot according to any one of claims 1-5.
CN202111655419.7A 2021-12-30 2021-12-30 Robot and collision detection device and method thereof Active CN114299039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111655419.7A CN114299039B (en) 2021-12-30 2021-12-30 Robot and collision detection device and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111655419.7A CN114299039B (en) 2021-12-30 2021-12-30 Robot and collision detection device and method thereof

Publications (2)

Publication Number Publication Date
CN114299039A CN114299039A (en) 2022-04-08
CN114299039B true CN114299039B (en) 2022-08-19

Family

ID=80974191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111655419.7A Active CN114299039B (en) 2021-12-30 2021-12-30 Robot and collision detection device and method thereof

Country Status (1)

Country Link
CN (1) CN114299039B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116352756A (en) * 2022-11-25 2023-06-30 威凯检测技术有限公司 Obstacle avoidance function detection system and detection method for intelligent service robot in indoor scene
CN117162098B (en) * 2023-10-07 2024-05-03 合肥市普适数孪科技有限公司 Autonomous planning system and method for robot gesture in narrow space

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107341825A (en) * 2017-07-06 2017-11-10 西南科技大学 A kind of method for simplifying for large scene high-precision three-dimensional laser measurement cloud data
CN109540142B (en) * 2018-11-27 2021-04-06 达闼科技(北京)有限公司 Robot positioning navigation method and device, and computing equipment
CN110253570B (en) * 2019-05-27 2020-10-27 浙江工业大学 Vision-based man-machine safety system of industrial mechanical arm
CN111660295B (en) * 2020-05-28 2023-01-03 中国科学院宁波材料技术与工程研究所 Industrial robot absolute precision calibration system and calibration method
CN111982127A (en) * 2020-08-31 2020-11-24 华通科技有限公司 Lightweight-3D obstacle avoidance method

Also Published As

Publication number Publication date
CN114299039A (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN110587600B (en) Point cloud-based autonomous path planning method for live working robot
CN110116407B (en) Flexible robot position and posture measuring method and device
CN114299039B (en) Robot and collision detection device and method thereof
Zhu et al. Online camera-lidar calibration with sensor semantic information
CN109202885B (en) Material carrying and moving composite robot
CN107914272B (en) Method for grabbing target object by seven-degree-of-freedom mechanical arm assembly
US11694432B2 (en) System and method for augmenting a visual output from a robotic device
Song et al. CAD-based pose estimation design for random bin picking using a RGB-D camera
CN110170995B (en) Robot rapid teaching method based on stereoscopic vision
US20120259462A1 (en) Information processing apparatus, control method thereof and storage medium
CN113276106B (en) Climbing robot space positioning method and space positioning system
CN111192307A (en) Self-adaptive deviation rectifying method based on laser cutting of three-dimensional part
CN111260649B (en) Close-range mechanical arm sensing and calibrating method
CN111151463A (en) Mechanical arm sorting and grabbing system and method based on 3D vision
CN114102585A (en) Article grabbing planning method and system
CN115213896A (en) Object grabbing method, system and equipment based on mechanical arm and storage medium
CN112975929A (en) Passenger plane charging socket identification positioning docking system and method based on multi-feature fusion
CN113961013A (en) Unmanned aerial vehicle path planning method based on RGB-D SLAM
CN116563491B (en) Digital twin scene modeling and calibration method
Ranjan et al. Identification and control of NAO humanoid robot to grasp an object using monocular vision
CN110722547B (en) Vision stabilization of mobile robot under model unknown dynamic scene
Subedi et al. Camera-lidar data fusion for autonomous mooring operation
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system
Chen et al. A method for mobile robot obstacle avoidance based on stereo vision
CN115810188A (en) Method and system for identifying three-dimensional pose of fruit on tree based on single two-dimensional image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant