CN113808201A - Target object detection method and guided grabbing method - Google Patents
- Publication number
- CN113808201A (application number CN202110900383.8A)
- Authority
- CN
- China
- Prior art keywords
- target object
- point
- point cloud
- calibration
- laser
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses a target object detection method and a guided grabbing method. The target object detection method comprises: step (1): jointly calibrating a camera and a laser sensor, and performing hand-eye calibration of the robot; step (2): the robot separately acquires an image of the target object and a laser point cloud, segments the image with an image segmentation model to obtain the target object, and, using the calibration from step (1), screens the laser point cloud to obtain the point cloud model of the target object in space; step (3): computing the contour, pose and centre of the point cloud model from step (2) with a template-matching algorithm, thereby detecting the target object. The invention uses a depth camera system to model the target object with high precision, so that irregular targets are accurately identified and their grasping is guided, and a live-working robot can accurately identify and grasp the target object during operation.
Description
Technical Field
The invention relates to the field of industrial automation, and in particular to a target object detection method for complex scenes, together with methods for system calibration, pose estimation and robotic-arm grasping.
Background
In live-line working, electric power workers need to replace objects such as lightning arresters, porcelain insulators, nuts and power lines while facing the threat of 10 kV high voltage, so accidents occur easily. For such scenes, the invention combines hand-eye calibration of the robot with the strengths of images in object detection, recognition and segmentation and the accurate ranging capability of a ToF lidar in three-dimensional space, so that a live-working robot can accurately identify and grasp the target object during operation. The method can also be applied to other fields of industrial automation, such as sorting, assembly, feeding and welding, and in particular to small-target recognition in hazardous, complex scenes, improving the accuracy and safety of existing work.
At present, there is no solution on the market for visually guided grasping of small targets in complex scenes.
Disclosure of Invention
Purpose of the invention: for the above scenes, the invention provides a target object detection method and a guided grabbing method in which a robot replaces manual labour, protecting the safety of electric power workers in hazardous environments and improving working conditions.
The technical scheme is as follows:
A target object detection method, comprising:
step (1): jointly calibrating a camera and a laser sensor, and performing hand-eye calibration of the robot;
step (2): the robot separately acquires an image of the target object and a laser point cloud, segments the image with an image segmentation model to obtain the target object, and, using the calibration from step (1), screens the laser point cloud to obtain the point cloud model of the target object in space;
step (3): computing the contour, pose and centre of the point cloud model from step (2) with a template-matching algorithm, thereby detecting the target object.
In step (1), the hand-eye calibration of the robot is specifically:
1) obtaining the rotation-translation relation between the camera and the end of the mechanical arm from the camera mounting parameters;
2) fixing the end of the mechanical arm, recording its pose, and acquiring a laser point cloud of the calibration tool with the lidar;
3) extracting the calibration points of the calibration tool, obtaining the rotation-translation relation between the lidar coordinate system and the calibration tool by solving a PnP problem, and recording the end pose of the mechanical arm at that moment;
4) repeating steps 2) and 3) to obtain multiple groups of lidar-to-calibration-tool rotation-translation relations and the corresponding end poses of the mechanical arm, and solving with the classic Tsai-Lenz two-step method to obtain the transformation between the end pose of the mechanical arm and the lidar pose.
The calibration tool is a calibration plate removably mounted at the end of the mechanical arm.
Several round holes may be formed in the calibration plate to serve as calibration points.
Alternatively, the calibration plate carries a grid or checkerboard pattern, and the corresponding calibration points are grid dots or checkerboard corners.
In step (2), screening the laser point cloud to obtain the point cloud model of the target object in space is specifically:
1) converting each point of the laser point cloud into the image coordinate system according to step (1) and judging whether it lies on the target object: if the point falls inside the image mask produced by image segmentation, it is a point on the target object;
2) storing all points of the laser point cloud that lie on the target object in a new point set, which is the point cloud model of the target object.
In step (3), computing the contour, pose and centre of the point cloud model of step (2) with a template-matching algorithm is specifically:
1) building a standard point cloud template library: scanning real objects to obtain the corresponding laser point clouds, stitching them into a point cloud template together with its size and contour features, and storing each template with its features to form the library;
2) projecting the intensity values of the acquired laser point cloud of the target object onto three orthogonal views of 3D space to obtain three target grayscale images; likewise retrieving the point cloud template of the corresponding object from the library and projecting its intensity values onto three orthogonal views of 3D space to obtain three standard grayscale images;
3) generating BRIEF keypoint-descriptor feature point pairs in the three target grayscale images and the three standard grayscale images respectively, and converting them onto the laser point cloud of the target object to obtain three groups of point-pair mappings between the target point cloud and the standard point cloud template;
4) matching with a RANSAC algorithm to obtain the contour and pose of the target object.
The method further comprises a fine-matching step: taking the obtained contour and pose of the target object as initial values, refining the match with an ICP (Iterative Closest Point) algorithm, and generating the finally matched contour, pose and centre of the target object.
The RANSAC algorithm is specifically:
41) randomly selecting four feature point pairs on the target point cloud to construct a minimum enveloping sphere, denoted model M, which contains all feature point pairs converted from the three standard grayscale images; computing the centroid of the sphere and the average distance from the centroid to the feature points inside it as the matching-error threshold;
42) computing the projection error between every point of the target point cloud and model M; if the projection error of a point is smaller than the threshold, adding it to the sphere to obtain an inlier set;
43) if the current inlier set contains more inliers than the previous best inlier set, updating it as the best set and updating the iteration count;
44) repeating 41)-43); once the iteration count exceeds K, stopping and taking the best inlier set, from which the matching features of the target point cloud are obtained.
A guided grabbing method, comprising the steps of:
(1) obtaining the contour, pose and centre of the target object with the target object detection method above;
(2) converting the contour and pose of the target object into the mechanical-arm coordinate system according to the transformation between the arm-end pose and the lidar pose obtained by the hand-eye calibration of step (1);
(3) computing the gripper opening at the end of the mechanical arm from the contour of the target object, moving the gripper centre to the optimal position according to the centre point of the target object, and computing the orientation of the gripper opening from the pose of the target object;
(4) planning the motion according to step (3) and driving the mechanical arm to the target pose, completing the grasp.
Beneficial effects: through hand-eye calibration of the robot, the invention fully exploits the strengths of images in object detection, recognition and segmentation and the accurate ranging capability of the lidar in three-dimensional space. The system models the target object with high precision using a depth camera system, achieving accurate 3D visual guidance and grasping of small irregular targets, so that a live-working robot can accurately identify and grasp the target object during operation. With relatively low hardware cost, accurate automatic target extraction is achieved; the robot replaces manual labour, protecting the safety of electric power workers in hazardous environments and improving working conditions. The method can also be applied to other fields of industrial automation, such as sorting, assembly, feeding and welding, and in particular to small-target recognition in hazardous, complex scenes, improving the accuracy and safety of existing work.
Drawings
Fig. 1 is a flowchart of a target object detection method according to the present invention.
FIG. 2 is a flow chart of the camera and laser sensor joint calibration of the present invention.
Fig. 3 is a schematic view of the hand-eye calibration tool of the present invention.
FIG. 4 is a flow chart of the point cloud processing of the present invention.
Fig. 5 is a flowchart of the robot arm motion planning of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and embodiments.
Fig. 1 is a flowchart of a target object detection method according to the present invention. As shown in fig. 1, the target object detection method of the present invention includes the steps of:
(1) carrying out combined calibration on a camera and a laser sensor;
fig. 2 is a flowchart of the camera and laser sensor joint calibration of the present invention, and as shown in fig. 2, the camera and laser sensor joint calibration of the present invention is specifically as follows:
(11) before calibration, the positions of the camera and the lidar are fixed. During calibration, after checking that the external devices are connected without fault, a calibration plate of fixed specification is mounted at the end of the mechanical arm; an image and a laser point cloud are acquired with the camera and the lidar respectively, the image is converted into a grayscale image with a fixed pixel-value range, and the intensity information of the laser point cloud is likewise converted into a grayscale image with a fixed pixel-value range;
(12) extracting the grid dots or checkerboard corners on the calibration plate and obtaining the rotation-translation relation between the camera coordinate system and the lidar coordinate system;
(13) transforming the points of the laser point cloud into the image coordinate system using the transformation obtained in step (12) and the imaging model of the camera, and checking whether the error between the transformed coordinates and the coordinates of the corresponding image points is within a reasonable range. The error is defined as the pixel difference, in the x or y direction, between a laser point transformed into the image coordinate system and its corresponding image point; if this difference does not exceed 1 pixel, the calibration is considered within the error range, otherwise recalibration is required;
the joint calibration result of the camera and the lidar is stored as a homogeneous transformation under a specified path for use by subsequent modules.
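The projection check of step (13) can be sketched as follows. The intrinsic matrix `K` and the lidar-to-camera rotation `R_cl` and translation `t_cl` below are illustrative placeholders, not values from the invention:

```python
import numpy as np

# Hypothetical intrinsics and extrinsics for illustration; real values come
# from the joint calibration described above.
K = np.array([[800.0,   0.0, 320.0],    # fx,  0, cx
              [  0.0, 800.0, 240.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R_cl = np.eye(3)                        # rotation: lidar -> camera
t_cl = np.array([0.05, 0.0, 0.0])       # translation: lidar -> camera (m)

def lidar_to_pixel(points_lidar):
    """Project Nx3 lidar points into pixel coordinates via the camera model."""
    pts_cam = points_lidar @ R_cl.T + t_cl   # into the camera frame
    uvw = pts_cam @ K.T                      # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]          # perspective divide

def within_tolerance(projected, measured, tol=1.0):
    """Accept the calibration if |du| <= tol and |dv| <= tol for every point."""
    return bool(np.all(np.abs(projected - measured) <= tol))
```

With these placeholder extrinsics, a lidar point at (0, 0, 2) m projects to pixel (340, 240), and `within_tolerance` implements the 1-pixel acceptance rule of step (13).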
(2) hand-eye calibration of the robot, specifically as follows:
(21) before calibration, the positions of the camera and the lidar are fixed, and the rotation-translation relation between the camera and the end of the mechanical arm is obtained from the camera mounting parameters;
(22) during calibration, after checking that the peripheral devices are connected without fault, the end of the mechanical arm is fixed and the coordinates of each arm joint are read; once the arm has come to rest, a laser point cloud of the calibration tool is acquired with the lidar, and its depth information is converted into a grayscale image with a fixed pixel-value range;
(23) extracting the calibration points of the calibration tool, solving a PnP problem to obtain the rotation-translation relation between the lidar coordinate system and the calibration tool, and recording the end pose of the mechanical arm at that moment;
(24) repeating steps (22)-(23) to obtain multiple groups of lidar-to-calibration-tool rotation-translation relations and the corresponding end poses of the mechanical arm, and solving with the classic Tsai-Lenz two-step method to obtain the transformation between the end pose of the mechanical arm and the lidar pose;
after calibration, the result is stored in a file under a specified path or in the runtime environment for use by subsequent modules.
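The two-step structure of step (24) — rotation first, then translation — can be sketched as below. This is a simplified variant in the spirit of the Tsai-Lenz method, not the exact published algorithm: the rotation is recovered by aligning the rotation axes of the relative motions (orthogonal Procrustes), and the translation by stacked linear least squares. All names are illustrative.

```python
import numpy as np

def rot_axis(R):
    """Unit rotation axis of a rotation matrix (angle assumed not near 0 or pi)."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

def solve_ax_xb(As, Bs):
    """Given relative motions satisfying A_i X = X B_i (4x4 homogeneous),
    recover X.  Step 1: rotation from axis correspondences (Procrustes);
    step 2: translation from the stacked linear system."""
    a_axes = np.array([rot_axis(A[:3, :3]) for A in As])
    b_axes = np.array([rot_axis(B[:3, :3]) for B in Bs])
    # Procrustes: find Rx with Rx @ b_i ~= a_i
    H = b_axes.T @ a_axes
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rx = Vt.T @ D @ U.T
    # Step 2: (R_Ai - I) t_x = Rx t_Bi - t_Ai, solved jointly by least squares
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    rhs = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

At least two motion pairs with non-parallel rotation axes are needed for the system to be well determined, which is why step (24) collects multiple groups of poses.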
As shown in fig. 3, the calibration tool of the invention is a calibration plate removably mounted at the end of the mechanical arm, carrying several round holes, a grid or a checkerboard; the corresponding calibration points are the round holes, grid dots or checkerboard corners.
(3) In the working scene, the robot acquires an image and a laser point cloud with the camera and the lidar respectively, segments the image with a preset image segmentation model to obtain the target object, and, combining step (1), obtains the point cloud model of the segmented target object in space;
(31) when the robot reaches the working area, it acquires an image and a laser point cloud with the camera and the lidar respectively, loads a trained deep neural network model for image segmentation from a specified path as the image segmentation model, segments the image to obtain the target object mask, and stores the mask under a specified system path;
(32) converting each point of the laser point cloud into the image coordinate system according to step (1) and judging whether it lies on the target object; specifically, if the point falls inside the image mask produced by segmentation, it is a point on the target object;
(33) storing all points of the laser point cloud that lie on the target object in a new point set, which is the point cloud model of the target object.
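Steps (32)-(33) amount to a mask-membership test on projected points. A minimal sketch, where `project` stands for the calibrated lidar-to-image mapping of step (1) and is supplied by the caller:

```python
import numpy as np

def filter_points_by_mask(points_lidar, mask, project):
    """Keep only lidar points whose image projection falls inside the
    segmentation mask (True = target-object pixel)."""
    h, w = mask.shape
    uv = np.round(project(points_lidar)).astype(int)   # Nx2 pixel coordinates
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    keep = np.zeros(len(points_lidar), dtype=bool)
    keep[inside] = mask[uv[inside, 1], uv[inside, 0]]  # mask indexed [row, col]
    return points_lidar[keep]
```

Points projecting outside the image bounds are discarded along with points outside the mask, so the returned set is exactly the point cloud model of step (33).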
(4) Computing the contour size, object centre and pose of the point cloud model obtained in step (3);
to speed up matching while meeting industrial requirements for the reliability and stability of target grasping, template matching is performed with a scheme that combines 2D images and 3D point clouds:
(41) a lidar first scans real objects (such as porcelain insulators, lightning arresters and nuts); the scanned laser point clouds are denoised and stitched together, the size, contour and rotation-axis features of the stitched point cloud template are computed, and finally the template and its features are stored in a point cloud library to form the standard point cloud template library;
(42) the laser point cloud acquired in step (3) is denoised, and its intensity values are projected onto three orthogonal views of 3D space to obtain three target grayscale images; likewise, the point cloud template of the corresponding object is retrieved from the standard point cloud template library and its intensity values are projected onto three orthogonal views of 3D space to obtain three standard grayscale images;
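The orthogonal intensity projections of step (42) can be sketched as follows; the grid resolution and the max-intensity rasterisation rule are assumptions for illustration:

```python
import numpy as np

def orthographic_intensity_views(points, intensity, res=64):
    """Project per-point intensities onto the XY, XZ and YZ planes,
    producing three grayscale images (max intensity per grid cell)."""
    mins, maxs = points.min(0), points.max(0)
    span = np.where(maxs - mins > 0, maxs - mins, 1.0)     # avoid divide-by-zero
    idx = ((points - mins) / span * (res - 1)).astype(int)  # per-axis cell index
    views = []
    for a, b in [(0, 1), (0, 2), (1, 2)]:                   # XY, XZ, YZ planes
        img = np.zeros((res, res))
        np.maximum.at(img, (idx[:, b], idx[:, a]), intensity)
        views.append(img)
    return views
```

Running the same projection on the target cloud and on the retrieved template yields the three target and three standard grayscale images used for feature matching below.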
(43) the feature points of the three target grayscale images from step (42) are obtained with a feature point detection algorithm, and BRIEF keypoint-descriptor feature point pairs are generated in the three target grayscale images from them; BRIEF feature point pairs are likewise generated in the three standard grayscale images;
(44) the feature point pairs of the three target and three standard grayscale images are converted back onto the laser point cloud acquired in step (3) according to steps (1) and (2), building a 2D-3D mapping: the mapping between the feature point pairs of each target grayscale image and those of the corresponding standard grayscale image, i.e. three groups of point-pair mappings between the target point cloud and the standard point cloud template;
(45) coarse matching is performed with a RANSAC algorithm on the three groups of point-pair mappings from step (44), obtaining three pairs of feature points on the target point cloud as point cloud features together with the coarsely matched contour and pose of the target object; the iteration limit K is the number of feature point pairs;
the coarse matching with the RANSAC algorithm is specifically:
1) randomly selecting four feature point pairs on the target point cloud to construct a minimum enveloping sphere, denoted model M, which contains all feature point pairs converted from the three standard grayscale images;
2) computing the centroid of the sphere and the average distance from the centroid to the feature points inside it as the matching-error threshold; computing the projection error between every point of the target point cloud and model M, and adding each point whose error is below the threshold to the sphere, yielding an inlier set;
3) if the current inlier set contains more inliers than the previous best inlier set, updating it as the best set and updating the iteration count;
4) repeating 1)-3); once the iteration count exceeds K, stopping and taking the best inlier set, from which the matching features of the target point cloud are obtained;
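The loop of steps 1)-4) can be rendered schematically as below. This is an illustration of the control flow rather than the exact criterion: the minimum enveloping sphere is approximated by the sample centroid plus mean radius, and a point-to-centre distance stands in for the projection error.

```python
import numpy as np

def ransac_inliers(points, max_iters=50, seed=0):
    """Schematic RANSAC loop following the steps above: sample four points,
    approximate their enveloping sphere by centroid + mean radius, collect
    inliers within that radius, and keep the largest inlier set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(max_iters):
        sample = points[rng.choice(len(points), 4, replace=False)]
        centre = sample.mean(axis=0)                          # sphere centre
        thresh = np.linalg.norm(sample - centre, axis=1).mean()  # error threshold
        inliers = np.linalg.norm(points - centre, axis=1) <= thresh
        if inliers.sum() > best.sum():                        # keep largest set
            best = inliers
    return best
```

On a tight cluster with one far outlier, the outlier is never accepted into the best inlier set, which is the behaviour the consensus step relies on.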
(46) after coarse matching, the contour and pose obtained in step (45) are used as initial values, and the match is refined with an ICP (Iterative Closest Point) algorithm, finally yielding the accurately matched contour, pose, rotation axis and centre point of the target object;
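A point-to-point ICP refinement of the kind invoked in step (46) might look like this minimal numpy sketch (brute-force nearest neighbours and a Kabsch solve per iteration; a production system would use a spatial index and outlier rejection):

```python
import numpy as np

def icp_point_to_point(src, dst, iters=20):
    """Refine the coarse alignment: repeatedly match each source point to its
    nearest target point and solve the best rigid transform by SVD (Kabsch)."""
    src_cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, fine for a sketch)
        d2 = ((src_cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Kabsch: best rotation/translation for these correspondences
        mu_s, mu_d = src_cur.mean(0), matched.mean(0)
        H = (src_cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        src_cur = src_cur @ R.T + t            # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Because ICP only converges locally, the coarse RANSAC pose of step (45) is essential as the initial value, exactly as the description states.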
after computation, the rotation axis and point-cloud pose are represented as rotation-translation vectors and the object contour as an envelope, and the result is stored in a file under a specified path or in the runtime environment for use by subsequent modules.
The invention also provides a guided grabbing method. The robot obtains the contour and pose of the target object and its rotation axis according to the method above, and converts them into the mechanical-arm coordinate system using the transformation between the arm-end pose and the lidar pose obtained by the hand-eye calibration of step (2). It then computes the gripper opening at the end of the arm from the contour information, moves the gripper centre to the optimal position according to the centre point of the target object, and computes the orientation of the gripper opening from the pose information; motion planning finally drives the mechanical arm to the target pose, completing the grasp.
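The grasp-parameter computation described above might be sketched as follows; `T_arm_lidar` stands for the hand-eye transformation, and the opening rule (narrow side of the contour envelope plus a clearance margin) is an illustrative assumption rather than the invention's exact formula:

```python
import numpy as np

def grasp_parameters(T_arm_lidar, pose_lidar, bbox_extent, clearance=0.01):
    """Transform the detected pose into the arm frame and size the gripper
    opening from the contour envelope (names and margins are illustrative)."""
    pose_arm = T_arm_lidar @ pose_lidar                 # 4x4 homogeneous poses
    opening = float(min(bbox_extent[:2]) + clearance)   # grip across the narrow side
    centre_arm = pose_arm[:3, 3]                        # target for the gripper centre
    approach = pose_arm[:3, 2]                          # tool z-axis as approach direction
    return pose_arm, opening, centre_arm, approach
```

The returned pose, opening, centre and approach direction are exactly the quantities the motion planner needs to drive the arm to the target pose.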
The hardware system used by the invention consists of an industrial PC, an industrial gigabit switch, a mechanical arm, a colour camera and a ToF lidar. The system connects to the GigE interfaces of the colour camera and the ToF lidar through the industrial gigabit switch via TCP/IP, completing data transmission among the colour camera, the ToF lidar and the industrial PC.
Although preferred embodiments of the invention have been described in detail, the invention is not limited to the details of the foregoing embodiments; various equivalent changes (for example in number, shape or position) may be made to the technical solution within the technical spirit of the invention, and all such equivalent changes fall within its scope of protection.
Claims (10)
1. A target object detection method, characterized by comprising:
step (1): jointly calibrating a camera and a laser sensor, and performing hand-eye calibration of the robot;
step (2): the robot separately acquires an image of the target object and a laser point cloud, segments the image with an image segmentation model to obtain the target object, and, using the calibration from step (1), screens the laser point cloud to obtain the point cloud model of the target object in space;
step (3): computing the contour, pose and centre of the point cloud model from step (2) with a template-matching algorithm, thereby detecting the target object.
2. The target object detection method according to claim 1, characterized in that in step (1) the hand-eye calibration of the robot is specifically:
1) obtaining the rotation-translation relation between the camera and the end of the mechanical arm from the camera mounting parameters;
2) fixing the end of the mechanical arm, recording its pose, and acquiring a laser point cloud of the calibration tool with the lidar;
3) extracting the calibration points of the calibration tool, obtaining the rotation-translation relation between the lidar coordinate system and the calibration tool by solving a PnP problem, and recording the end pose of the mechanical arm at that moment;
4) repeating steps 2) and 3) to obtain multiple groups of lidar-to-calibration-tool rotation-translation relations and the corresponding end poses of the mechanical arm, and solving with the classic Tsai-Lenz two-step method to obtain the transformation between the end pose of the mechanical arm and the lidar pose.
3. The target object detection method according to claim 2, characterized in that the calibration tool is a calibration plate removably mounted at the end of the mechanical arm.
4. The target object detection method according to claim 3, characterized in that several round holes are formed in the calibration plate to serve as calibration points.
5. The target object detection method according to claim 3, characterized in that the calibration plate carries a grid or checkerboard pattern, and the corresponding calibration points are grid dots or checkerboard corners.
6. The target object detection method according to claim 1, characterized in that in step (2) screening the laser point cloud to obtain the point cloud model of the target object in space is specifically:
1) converting each point of the laser point cloud into the image coordinate system according to step (1) and judging whether it lies on the target object: if the point falls inside the image mask produced by image segmentation, it is a point on the target object;
2) storing all points of the laser point cloud that lie on the target object in a new point set, which is the point cloud model of the target object.
7. The target object detection method according to claim 1, characterized in that: in the step (3), the contour, the pose and the center of the point cloud model obtained in the step (2) are calculated by a template matching algorithm, specifically:
1) establishing a standard point cloud template base: scanning actual objects to obtain corresponding laser point clouds, splicing the laser point clouds to obtain a point cloud template and corresponding size and outline characteristics of the point cloud template, and storing the corresponding point cloud template and the corresponding characteristics of the point cloud template to form a standard point cloud template library;
2) projecting the acquired intensity values of the laser point cloud of the target object on three orthogonal views of a 3D space to obtain three target gray level images; simultaneously acquiring a point cloud template of a corresponding object from a standard point cloud template library, and respectively projecting intensity values of the point cloud template on three orthogonal views of a 3D space to obtain three standard gray level images;
3) respectively generating BRIEF keypoint descriptor feature point pairs in the three target gray level images and the three standard gray level images, and converting them back to the laser point cloud of the target object to obtain three groups of point-pair mappings between the target point cloud and the standard point cloud template;
4) matching with the RANSAC algorithm to obtain the contour and pose of the target object.
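The intensity projection of step 2) can be sketched as follows: the point intensities are rasterized onto the three orthogonal planes (XY, XZ, YZ) of the 3D space to form three gray level images. The grid resolution `res` and image `size` are illustrative assumptions.

```python
import numpy as np

def project_intensity(points, intensity, res=0.01, size=64):
    """Project point intensities onto the three orthogonal views of 3D space."""
    images = []
    for axes in ((0, 1), (0, 2), (1, 2)):          # XY, XZ, YZ views
        img = np.zeros((size, size))
        # map metric coordinates to pixel cells centred in the image
        ij = np.clip((points[:, axes] / res).astype(int) + size // 2, 0, size - 1)
        # keep the maximum intensity when several points land in one cell
        np.maximum.at(img, (ij[:, 0], ij[:, 1]), intensity)
        images.append(img)
    return images
```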
8. The target object detection method according to claim 7, characterized in that: further comprising a precise matching step: taking the obtained contour and pose of the target object as initial values, optimizing the matching result with an ICP (Iterative Closest Point) algorithm, and generating the final, precisely matched contour, pose and center of the target object.
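The refinement of claim 8 can be illustrated with a minimal point-to-point ICP loop. For brevity this sketch uses a brute-force nearest-neighbour search (a KD-tree would normally be used) and a Kabsch solve per iteration; it is an illustration, not the patent's implementation.

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: refine the alignment of src onto dst."""
    R, t = np.eye(3), np.zeros(3)          # initial guess (coarse match result)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbours in dst for every moved src point
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        # best rigid transform for the current correspondences (Kabsch)
        cs, cm = moved.mean(0), matched.mean(0)
        H = (moved - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = cm - dR @ cs
        R, t = dR @ R, dR @ t + dt         # accumulate the incremental update
    return R, t
```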
9. The target object detection method according to claim 7, characterized in that: the RANSAC algorithm specifically comprises the following steps:
41) randomly selecting four feature point pairs from the target point cloud to construct a minimum envelope sphere, denoted model M, which contains all the feature point pairs converted from the three standard gray level images; calculating the centroid of the minimum envelope sphere, and taking the average distance from the centroid to the feature points inside the envelope sphere as the matching error threshold;
42) calculating the projection error between each point in the target point cloud and the model M; if the projection error of a point with respect to the model M is smaller than the matching error threshold, adding the point to the envelope sphere to obtain an inlier set;
43) if the current inlier set contains more points than the previous best inlier set, updating the current inlier set as the best inlier set and updating the iteration count;
44) repeating 41), 42) and 43); when the iteration count exceeds K, stopping the iteration to obtain the best inlier set and, from it, the matching features of the target point cloud.
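The sample-score-update structure of steps 41)-44) can be illustrated in simplified form. The sketch below estimates a rigid transform from three sampled point pairs via the Kabsch algorithm instead of the claim's four-point envelope-sphere model M, and uses a fixed error threshold; it shows the generic RANSAC loop, not the claimed variant.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation and translation mapping point set P onto Q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_rigid(src, dst, iters=300, thresh=0.05, seed=0):
    """RANSAC: sample minimal sets, fit a model, keep the largest inlier set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)   # minimal sample
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():                 # update best inlier set
            best = inliers
    R, t = kabsch(src[best], dst[best])                # refit on all inliers
    return R, t, best
```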
10. A guided grasping method to which the target object detection method according to any one of claims 1 to 9 is applied, characterized in that: the method comprises the following steps:
(1) acquiring the outline, the pose and the center of the target object by adopting the target object detection method of any one of claims 1 to 9;
(2) converting the contour and the pose of the target object into a mechanical arm coordinate system according to the conversion relation between the end pose of the mechanical arm and the pose of the laser radar obtained by the hand-eye calibration in the step (1);
(3) calculating the size of a gripper opening at the tail end of the mechanical arm according to the contour of the target object, controlling the center of the gripper at the tail end of the mechanical arm to move to the optimal position according to the central point of the target object, and calculating the orientation of the gripper opening at the tail end of the mechanical arm according to the pose of the target object;
(4) performing motion planning according to step (3) and controlling the mechanical arm to move to the target pose, thereby completing the grabbing operation.
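Steps (2) and (3) of the grasping method can be sketched as follows. The transforms `T_end_lidar` (lidar to arm end, from the hand-eye calibration) and `T_base_end` (arm end to base), the PCA-based gripper orientation and the `margin` parameter are illustrative assumptions rather than the claimed calculation.

```python
import numpy as np

def grasp_params(contour_lidar, T_end_lidar, T_base_end, margin=0.01):
    """Transform the target contour into the arm base frame and size the gripper."""
    T = T_base_end @ T_end_lidar                       # lidar frame -> base frame
    pts = (T[:3, :3] @ contour_lidar.T).T + T[:3, 3]
    center = pts.mean(0)                               # point the gripper centre moves to
    # principal axes via SVD: first axis gives the gripper orientation,
    # the extent along the second axis sizes the gripper opening
    _, _, Vt = np.linalg.svd(pts - center)
    width = np.ptp(pts @ Vt[1]) + margin               # opening across the minor axis
    return center, Vt[0], width
```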
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110900383.8A CN113808201A (en) | 2021-08-06 | 2021-08-06 | Target object detection method and guided grabbing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113808201A true CN113808201A (en) | 2021-12-17 |
Family
ID=78893346
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110900383.8A Pending CN113808201A (en) | 2021-08-06 | 2021-08-06 | Target object detection method and guided grabbing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113808201A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114571467A (en) * | 2022-04-07 | 2022-06-03 | 赛那德科技有限公司 | Mechanical arm control method and system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170305694A1 (en) * | 2014-10-03 | 2017-10-26 | Wynright Corporation | Perception-Based Robotic Manipulation System and Method for Automated Truck Unloader that Unloads/Unpacks Product from Trailers and Containers |
CN109934230A (en) * | 2018-09-05 | 2019-06-25 | 浙江大学 | A kind of radar points cloud dividing method of view-based access control model auxiliary |
CN109927036A (en) * | 2019-04-08 | 2019-06-25 | 青岛小优智能科技有限公司 | A kind of method and system of 3D vision guidance manipulator crawl |
CN110355754A (en) * | 2018-12-15 | 2019-10-22 | 深圳铭杰医疗科技有限公司 | Robot eye system, control method, equipment and storage medium |
CN110497373A (en) * | 2019-08-07 | 2019-11-26 | 大连理工大学 | A kind of combined calibrating method between the three-dimensional laser radar and mechanical arm of Mobile working machine people |
CN111251295A (en) * | 2020-01-16 | 2020-06-09 | 清华大学深圳国际研究生院 | Visual mechanical arm grabbing method and device applied to parameterized parts |
CN112001955A (en) * | 2020-08-24 | 2020-11-27 | 深圳市建设综合勘察设计院有限公司 | Point cloud registration method and system based on two-dimensional projection plane matching constraint |
Non-Patent Citations (2)
Title |
---|
L. PANG, ET AL.: "An Efficient 3D Pedestrian Detector with Calibrated RGB Camera and 3D LiDAR", 2019 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO), 6 December 2019 (2019-12-06), pages 2902 - 2907, XP033691577, DOI: 10.1109/ROBIO49542.2019.8961523 * |
QIN, Baoling: "Research on Monocular-Vision 3D Reconstruction Based on Optical Flow and Scene Flow", 15 March 2017 (2017-03-15), pages 28 - 29 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2023090917A (en) | Robot system with advanced scanning mechanism | |
JP5558585B2 (en) | Work picking device | |
Bone et al. | Automated modeling and robotic grasping of unknown three-dimensional objects | |
US20150224648A1 (en) | Robotic system with 3d box location functionality | |
CN114355953B (en) | High-precision control method and system of multi-axis servo system based on machine vision | |
CN113284179B (en) | Robot multi-object sorting method based on deep learning | |
CN111360821A (en) | Picking control method, device and equipment and computer readable storage medium |
CN112518748A (en) | Automatic grabbing method and system of vision mechanical arm for moving object | |
Hsu et al. | Development of a faster classification system for metal parts using machine vision under different lighting environments | |
Farag et al. | Grasping and positioning tasks for selective compliant articulated robotic arm using object detection and localization: Preliminary results | |
WO2023017413A1 (en) | Systems and methods for object detection | |
Zhou et al. | Design and test of a sorting device based on machine vision | |
CN113808201A (en) | Target object detection method and guided grabbing method | |
CN112338922B (en) | Five-axis mechanical arm grabbing and placing method and related device | |
Fan et al. | An automatic robot unstacking system based on binocular stereo vision | |
US20240003675A1 (en) | Measurement system, measurement device, measurement method, and measurement program | |
CN115972192A (en) | 3D computer vision system with variable spatial resolution | |
CN117794704A (en) | Robot control device, robot control system, and robot control method | |
JP2022181173A (en) | Transparent object bin picking | |
CN113240670A (en) | Image segmentation method for object to be operated in live-wire operation scene | |
JP2022181174A (en) | Object bin picking with rotation compensation | |
Ren et al. | Vision based object grasping of robotic manipulator | |
CN113989368A (en) | High-precision positioning method and system for object surface | |
Ngo et al. | Development of a Color Object Classification and Measurement System Using Machine Vision. | |
Jiang et al. | Target object identification and localization in mobile manipulations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||