CN115139325A - Object grasping system - Google Patents

Object grasping system

Info

Publication number
CN115139325A
CN115139325A
Authority
CN
China
Prior art keywords
point cloud
target point
axis
determining
bottle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211067628.4A
Other languages
Chinese (zh)
Other versions
CN115139325B (en)
Inventor
刘中元 (Liu Zhongyuan)
李勇奇 (Li Yongqi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Star Ape Philosophy Technology Shanghai Co ltd
Xingyuanzhe Technology Shenzhen Co ltd
Original Assignee
Star Ape Philosophy Technology Shanghai Co ltd
Xingyuanzhe Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Star Ape Philosophy Technology Shanghai Co ltd, Xingyuanzhe Technology Shenzhen Co ltd filed Critical Star Ape Philosophy Technology Shanghai Co ltd
Priority to CN202211067628.4A priority Critical patent/CN115139325B/en
Publication of CN115139325A publication Critical patent/CN115139325A/en
Application granted granted Critical
Publication of CN115139325B publication Critical patent/CN115139325B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J15/00Gripping heads and other end effectors
    • B25J15/0023Gripper surfaces directly activated by a fluid
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/04Sorting according to size
    • B07C5/10Sorting according to size measured by light-responsive means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36Sorting apparatus characterised by the means used for distribution
    • B07C5/361Processing or control devices therefor, e.g. escort memory
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36Sorting apparatus characterised by the means used for distribution
    • B07C5/361Processing or control devices therefor, e.g. escort memory
    • B07C5/362Separating or distributor mechanisms
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1612Programme controls characterised by the hand, wrist, grip control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C2501/00Sorting according to a characteristic or feature of the articles or material to be sorted
    • B07C2501/0063Using robots

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention provides an object grasping system comprising a depth camera, a processor module, and a robot module. The depth camera comprises a projector and a light receiving sensor: the projector vertically projects structured light onto a vertically placed press bottle, and the light receiving sensor receives the structured light reflected by the bottle to generate a point cloud image viewed from above the bottle. The processor module acquires the point cloud image, crops an ROI (region of interest) from it to determine a first target point cloud, determines the longest axis of the first target point cloud, determines the symmetry axis of the first target point cloud along the longest-axis direction according to that longest axis, and determines a suction point or grasping point of the press bottle according to the symmetry axis. The robot module sucks or grasps the press bottle at the suction point or grasping point, and can thereby handle objects with an elongated liquid outlet nozzle at the top, such as press bottles.

Description

Object grasping system
Technical Field
The present invention relates to sorting robots, and in particular, to an object grasping system.
Background
A sorting robot is a robot equipped with sensors, an objective lens, and an electron-optical system, and can quickly sort goods.
Existing sorting robots usually lay a bottle-shaped object down and then grasp it by suction with a sucker-type clamp. In practice, however, there may be no room to lay the bottle down, for example when bottles are closely packed or the space around them is narrow.
In addition, a small-caliber bottle offers only a small grasping space at its top, so it can be grasped or sucked only if that space is identified accurately. The prior art provides no method for accurately identifying small-caliber bottle-shaped objects, so a mechanical arm or suction cup lacks the corresponding ability to grasp heavy small-caliber bottles.
Disclosure of Invention
In view of the deficiencies in the prior art, it is an object of the present invention to provide an object grasping system.
According to the present invention there is provided an object grasping system comprising:
the depth camera comprises a projector and a light receiving sensor; the projector vertically projects structured light onto a vertically placed press bottle, and the light receiving sensor receives the structured light reflected by the bottle to generate a point cloud image viewed from above the bottle;
the processor module is used for acquiring the point cloud image of the press bottle, cropping an ROI (region of interest) from the point cloud image to determine a first target point cloud, determining the longest axis of the first target point cloud, determining the symmetry axis of the first target point cloud along the longest-axis direction according to that longest axis, and determining a suction point or grasping point of the press bottle according to the symmetry axis;
and the robot module is used for sucking or grasping the press bottle at its suction point or grasping point.
Preferably, the processor module comprises:
the point cloud acquisition unit is used for acquiring a point cloud picture of the pressed bottle and intercepting an ROI (region of interest) on the point cloud picture to determine a first target point cloud;
a longest axis determining unit for determining a longest axis of the first target point cloud;
and the point location determining unit is used for determining a symmetry axis of the first target point cloud in the direction of the longest axis according to the longest axis of the first target point cloud, and further determining the suction point or the grabbing point of the pressing bottle according to the symmetry axis.
Preferably, the longest axis determining unit includes the following parts:
a point cloud acquisition unit for acquiring the first target point cloud;
a principal component analysis part for performing principal component analysis on the first target point cloud to convert the first target point cloud into a plurality of feature vectors in a feature vector space;
an axial determination unit, configured to determine a longest axis and a shortest axis of the first target point cloud according to the feature vector.
Preferably, the point location determining unit includes the following parts:
a point cloud position adjustment unit configured to orient a longest axis of the first target point cloud toward an X axis of the feature vector space, and orient a shortest axis of the ROI region toward a Z axis of the feature vector space;
a symmetry axis determining part for determining a symmetry axis of the first target point cloud in a longest axis direction according to the longest axis of the first target point cloud;
and the point location determining part is used for determining the suction point or the grabbing point of the pressing bottle according to the symmetry axis.
Preferably, the symmetry axis determining part includes:
a mirror-flip subsection, configured to mirror the first target point cloud across the XZ plane to generate a second target point cloud and to determine the flip matrix used in the mirroring;
a point cloud registration subsection, configured to register the first target point cloud and the second target point cloud, generate a third target point cloud, and determine a rotation matrix of the first target point cloud during registration;
a normal vector determination part for determining a normal vector of a symmetry axis according to two corresponding points on the first target point cloud and the third target point cloud;
a target point determination subsection for determining a target point on a symmetry axis according to the first target point cloud, the flip matrix, and the rotation matrix;
and a symmetry axis determination subsection for determining a position of the symmetry axis according to the normal vector of the symmetry axis and the target point.
Preferably, the symmetry axis determining part includes:
a mirror-flip subsection, configured to mirror the first target point cloud across the XZ plane to generate a second target point cloud;
a point cloud registration subsection, configured to register the first target point cloud and the second target point cloud, generate a third target point cloud symmetric along an X-axis of the feature vector space, and determine a rotation matrix of the first target point cloud during registration;
and a symmetry axis determination unit for determining a symmetry axis of the third target point cloud and further determining a symmetry axis of the first target point cloud according to the rotation matrix.
Preferably, the point cloud position adjustment unit is configured to align the longest axis of the first target point cloud with the X-axis orientation of the feature vector space as closely as possible, and the shortest axis of the ROI region with the Z-axis orientation as closely as possible.
Preferably, the projector vertically projects structured light onto a vertically placed press bottle, and the light receiving sensor receives the structured light reflected by the bottle to generate a point cloud image viewed from above the bottle;
the ROI is the region of the liquid outlet nozzle at the top of the press bottle in the point cloud image.
Preferably, when determining the ROI containing the top liquid outlet nozzle of the press bottle, an RGB image viewed from above the bottle is acquired by an RGB camera, the RGB image being aligned with the point cloud image; the ROI containing the top liquid outlet nozzle is first recognized on the RGB image by a deep learning model, and the corresponding ROI is then determined on the point cloud image.
Preferably, the robot module comprises an end effector by which the pressed bottle is grasped;
the end effector is an air bag clamp, the air bag clamp including:
the clamp seat is provided with an installation position;
the air bag is arranged at the mounting position; the side wall of the air bag forms a clamping space, the outer side wall of the air bag is subjected to radial supporting force applied by the clamp seat in an inflated state, and the inner side wall of the air bag expands inwards in the radial direction to clamp an object to be clamped;
a first avoidance notch is formed in the clamp seat, and a second avoidance notch is formed in the position, corresponding to the first avoidance notch, of the air bag; the first avoiding notch and the second avoiding notch are used for the liquid outlet nozzle to move into the clamping space.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, a point cloud picture of a pressed bottle is obtained, an ROI (region of interest) area is cut out from the point cloud picture to determine a first target point cloud, the longest axis of the first target point cloud is determined, the symmetry of the first target point cloud in the direction of the longest axis is determined according to the longest axis of the first target point cloud, and a suction point or a grabbing point of the pressed bottle is determined according to the symmetry axis, so that a liquid outlet nozzle at the top is a long strip-shaped object, such as the pressed bottle, to be sucked or grabbed;
according to the method, a first target point cloud is converted into a plurality of characteristic vectors in a characteristic vector space through principal component analysis, the longest axis of the first target point cloud is determined, then a second target point cloud is generated according to the first target point cloud and mirror inversion along an XZ plane, registration is carried out on the second target point cloud to generate a third target point cloud, then the symmetry axis is calculated, accurate calculation of the symmetry axis is achieved when the density of the point cloud on a long strip-shaped object is low, and accurate calculation of an absorption point or a capture point is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort. Other features, objects and advantages of the present invention will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a first module of an object grasping system according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a second module of the object grasping system according to the embodiment of the invention.
Fig. 3 is a schematic diagram of a third module of the object grasping system according to the embodiment of the invention.
Fig. 4 is a schematic diagram of a fourth module of the object grasping system according to the embodiment of the invention.
Fig. 5 is a schematic diagram of a fifth module of the object grasping system according to the embodiment of the invention.
Fig. 6 is a schematic diagram of a sixth module of the object grasping system according to the embodiment of the invention.
Fig. 7 is a schematic diagram of the principle of object grasping posture recognition in the embodiment of the present invention.
Fig. 8 is a schematic diagram of the principle of determining the position of the symmetry axis in the modification of the present invention.
Fig. 9 is a schematic view of an article picking robot sucking a pressed bottle according to an embodiment of the present invention.
Fig. 10 is a schematic diagram of an article picking robot employing an object grasping system according to an embodiment of the present invention.
Fig. 11 is a flowchart illustrating the steps of an object grasping posture recognition method according to an embodiment of the present invention.
Fig. 12 is a schematic structural diagram of an object grasping posture recognition apparatus in the embodiment of the present invention.
Fig. 13 is a schematic structural diagram of a computer-readable storage medium in an embodiment of the present invention.
In the figure:
1 is a clamp seat; 2 is an air bag; and 3, a liquid outlet nozzle.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will aid those skilled in the art in further understanding the present invention, but are not intended to limit it in any manner. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The following describes the technical solutions of the present invention and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a first module of an object grasping system according to an embodiment of the present invention; as shown in Fig. 1, the object grasping system according to the present invention includes:
the depth camera 101, comprising a projector and a light receiving sensor; the projector vertically projects structured light onto a vertically placed press bottle, and the light receiving sensor receives the structured light reflected by the bottle to generate a point cloud image viewed from above the bottle;
the processor module 102, used for acquiring the point cloud image of the press bottle, cropping an ROI (region of interest) from the point cloud image to determine a first target point cloud, determining the longest axis of the first target point cloud, determining the symmetry axis of the first target point cloud along the longest-axis direction according to that longest axis, and determining a suction point or grasping point of the press bottle according to the symmetry axis;
and the robot module 103, used for sucking or grasping the press bottle at its suction point or grasping point.
In the embodiment of the invention, the projector vertically projects structured light onto a vertically placed press bottle, and the light receiving sensor receives the structured light reflected by the bottle to generate a point cloud image viewed from above the bottle;
the ROI is the region of the liquid outlet nozzle at the top of the press bottle in the point cloud image.
In an embodiment of the present invention, the structured light includes lattice structured light, stripe structured light, and coded structured light. When a plurality of pressing bottles are normally placed, the liquid outlet nozzle of the pressing bottle is positioned at the upper end, and the projector is controlled to vertically project structured light to the pressing bottle at an overlooking angle.
The point cloud is a data set, and each point in the data set represents a set of X, Y, Z geometric coordinates and an intensity value that records the intensity of the return signal as a function of the object surface reflectivity. When these points are combined together, a point cloud, i.e., a collection of data points in space representing a 3D shape or object, is formed. The point cloud can also be automatically colored to achieve more realistic visualization.
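The data structure described above can be sketched as a plain array, one row per point; the values below are invented for illustration:

```python
import numpy as np

# A point cloud as an (N, 4) array: columns are the X, Y, Z geometric
# coordinates and the return-signal intensity (values here are made up).
cloud = np.array([
    [0.10, 0.02, 0.85, 0.7],
    [0.11, 0.03, 0.86, 0.6],
    [0.09, 0.01, 0.84, 0.8],
])

xyz = cloud[:, :3]        # the 3D shape: one row per point
intensity = cloud[:, 3]   # reflectivity-dependent intensity per point
```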
Fig. 2 is a schematic diagram of a second module of the object grasping system according to the embodiment of the invention; as shown in Fig. 2, the processor module includes:
a point cloud obtaining unit 1021, configured to obtain a point cloud image of the pressed bottle, and intercept an ROI area on the point cloud image to determine a first target point cloud;
a longest axis determining unit 1022, configured to determine a longest axis of the first target point cloud;
and a point location determining unit 1023, configured to determine a symmetry axis of the first target point cloud in the direction of the longest axis according to the longest axis of the first target point cloud, and further determine the bottle pressing suction point or the grasping point according to the symmetry axis.
In the embodiment of the invention, when determining the ROI containing the top liquid outlet nozzle of the press bottle, an RGB image viewed from above the bottle is acquired by an RGB camera and aligned with the point cloud image; the ROI containing the top liquid outlet nozzle is first recognized on the RGB image by a deep learning model, and the corresponding ROI is then determined on the point cloud image.
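Because the RGB image and the point cloud image are aligned pixel-for-pixel, a 2D ROI box found on the RGB image can index an organized point cloud directly. A minimal sketch, assuming an organized H x W cloud; the box coordinates are made-up stand-ins for a detector's output (no real detection model is run here):

```python
import numpy as np

# Organized point cloud with the same resolution as the aligned RGB image;
# random values stand in for real depth-camera output.
H, W = 480, 640
organized_cloud = np.random.default_rng(3).normal(size=(H, W, 3))

# Hypothetical bounding box of the liquid outlet nozzle, as a detector
# might report it on the RGB image: (x0, y0) top-left, (x1, y1) bottom-right.
x0, y0, x1, y1 = 300, 120, 360, 200

# Cropping the same box out of the cloud yields the first target point cloud.
first_target = organized_cloud[y0:y1, x0:x1].reshape(-1, 3)
```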
Fig. 3 is a schematic diagram of a third module of the object grasping system according to the embodiment of the invention; as shown in Fig. 3, the longest axis determining unit 1022 includes the following parts:
a point cloud obtaining unit 10211 configured to obtain the first target point cloud;
a principal component analysis unit 10212, configured to perform principal component analysis on the first target point cloud to convert the first target point cloud into a plurality of feature vectors in a feature vector space;
an axial determining unit 10213, configured to determine a longest axis and a shortest axis of the first target point cloud according to the feature vector.
In the embodiment of the present invention, the principal component analysis uses the PCA method. Press bottles include common shampoo bottles, body wash bottles, liquid detergent bottles, and the like.
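The PCA step can be sketched with a plain eigendecomposition of the point covariance: the eigenvector with the largest eigenvalue gives the longest axis and the one with the smallest gives the shortest. A minimal sketch on a synthetic elongated cloud (the function name and test cloud are illustrative, not from the patent):

```python
import numpy as np

def principal_axes(points):
    """PCA of an (N, 3) point cloud: returns the eigenvectors of the
    covariance matrix ordered from largest to smallest eigenvalue, so
    column 0 is the longest axis and column 2 the shortest."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order], eigvals[order]

# Synthetic nozzle-like cloud: long along x, thin along z.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.2])
axes, _ = principal_axes(pts)
longest, shortest = axes[:, 0], axes[:, 2]
# longest is close to +/-[1, 0, 0]; shortest is close to +/-[0, 0, 1]
```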
Fig. 4 is a schematic diagram of a fourth module of the object grabbing system according to the embodiment of the present invention, and as shown in fig. 4, the point location determining unit 1023 includes the following parts:
a point cloud position adjustment unit 10231 for orienting the longest axis of the first target point cloud to the X axis of the feature vector space and the shortest axis of the ROI region to the Z axis of the feature vector space;
a symmetry axis determining unit 10232, configured to determine a symmetry axis of the first target point cloud in a longest axis direction according to the longest axis of the first target point cloud;
and a point location determining part 10233 for determining the pressing bottle suction point or the grasping point according to the symmetry axis.
In the embodiment of the invention, the longest axis of the first target point cloud is controlled to be consistent with the X-axis orientation of the characteristic vector space as much as possible, and the shortest axis of the ROI area is controlled to be consistent with the Z-axis orientation of the characteristic vector space as much as possible.
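Making the axes "consistent with the X-axis and Z-axis orientation" amounts to expressing the points in the ordered PCA basis. A minimal sketch, under the assumption that this projection is what the point cloud position adjustment unit computes:

```python
import numpy as np

def to_feature_frame(points):
    """Center an (N, 3) cloud and rotate it so the longest principal
    axis lies along X and the shortest along Z."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    basis = eigvecs[:, np.argsort(eigvals)[::-1]]   # longest..shortest
    if np.linalg.det(basis) < 0:
        basis[:, 2] *= -1.0                          # keep a right-handed frame
    return centered @ basis                          # coordinates in the PCA frame

rng = np.random.default_rng(1)
raw = rng.normal(size=(400, 3)) * np.array([1.0, 6.0, 0.1])  # long along y
aligned = to_feature_frame(raw)
spans = aligned.max(axis=0) - aligned.min(axis=0)
# After alignment the largest extent is along X and the smallest along Z.
```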
Fig. 5 is a schematic diagram of a fifth module of the object grasping system in the embodiment of the present invention, and Fig. 7 is a schematic diagram of the principle of object grasping posture recognition in the embodiment of the present invention; as shown in Fig. 5 and Fig. 7, the symmetry axis determining unit 10232 includes the following parts:
a mirror-flip subsection 102321, configured to mirror the first target point cloud across the XZ plane to generate a second target point cloud, and to determine the flip matrix used in the mirroring;
a point cloud registration subsection 102322, configured to register the first target point cloud and the second target point cloud, generate a third target point cloud, and determine a rotation matrix of the first target point cloud during registration;
a normal vector determination unit 102323, configured to determine a normal vector of a symmetry axis according to two corresponding points on the first target point cloud and the third target point cloud;
a target point determination part 102324 for determining a target point on a symmetry axis from the first target point cloud, the flip matrix and the rotation matrix;
a symmetry-axis determining part 102325 for determining the position of the symmetry axis based on the normal vector of the symmetry axis and the target point.
More specifically, the first target point cloud obtained after the PCA transformation is first mirrored across the XZ plane, and the transformation of a point in the first target point cloud may be expressed as:

P' = F · P,  F = diag(1, −1, 1)

wherein P is a point in the first target point cloud, P' is the corresponding point in the second target point cloud, and F is the flip matrix.

The first and second target point clouds are then ICP-registered to generate the third target point cloud, at which point a point P'' in the third target point cloud is:

P'' = R · F · P

where R is the rotation matrix determined during registration.

Therefore, for any point X in the original point cloud coordinates of the first target point cloud, the normal vector n of the symmetry plane and a point Xm on the symmetry plane can be obtained:

n = (X' − X) / ‖X' − X‖,  Xm = (X + X') / 2

wherein

X' = R · F · X

and X' is the point in the third target point cloud corresponding to X.
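These relations can be exercised numerically. In the sketch below the registration output R is constructed from a known ground-truth symmetry plane instead of being found by ICP (the plane normal and the sample point are invented for the test): the reflection H = I − 2nnᵀ across that plane factors as R·F with F = diag(1, −1, 1), and applying X' = R F X, n = (X' − X)/‖X' − X‖, Xm = (X + X')/2 to any off-plane point recovers the plane.

```python
import numpy as np

def symmetry_plane(X, R, F):
    """Normal n and point Xm of the symmetry plane from a point X,
    the flip matrix F and the registration rotation R:
    X' = R F X, n = (X' - X)/||X' - X||, Xm = (X + X')/2."""
    Xp = R @ F @ X
    n = (Xp - X) / np.linalg.norm(Xp - X)
    Xm = (X + Xp) / 2.0
    return n, Xm

F = np.diag([1.0, -1.0, 1.0])                    # mirror across the XZ plane

# Invented ground truth: a symmetry plane through the origin with unit
# normal n_true. The reflection across it is H = I - 2 n n^T, and the
# ideal registration rotation is R = H F (so that R F = H, det R = +1).
n_true = np.array([0.0, 1.0, 0.2])
n_true = n_true / np.linalg.norm(n_true)
H = np.eye(3) - 2.0 * np.outer(n_true, n_true)
R = H @ F

X = np.array([0.3, 0.5, -0.1])                   # any point off the plane
n, Xm = symmetry_plane(X, R, F)
# n is parallel to n_true, and Xm lies on the plane (n_true . Xm = 0)
```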
Fig. 6 is a schematic diagram of a sixth module of the object grasping system according to the embodiment of the present invention, and Fig. 8 is a schematic diagram of determining the position of the symmetry axis in the modification of the present invention; as shown in Fig. 6 and Fig. 8, the symmetry axis determining unit 10232 includes the following components:
a mirror-flip part 102321, configured to mirror the first target point cloud across the XZ plane to generate a second target point cloud;
a point cloud registration unit 102322, configured to register the first target point cloud and the second target point cloud, generate a third target point cloud symmetric along the X axis of the feature vector space, and determine a rotation matrix of the first target point cloud during registration;
a symmetry axis determining part 102323 for determining a symmetry axis of the third target point cloud and further for determining a symmetry axis of the first target point cloud based on the rotation matrix.
In an embodiment of the present invention, a symmetry axis of the third target point cloud is an X-axis of the feature vector space.
In an embodiment of the present invention, when the object grasping system provided by the present invention is used for grasping a pressing bottle, the method includes the following steps:
Step 1: acquiring a point cloud image of the liquid outlet nozzle on the pressing bottle, and cropping an ROI (region of interest) region on the point cloud image to determine a first target point cloud; the ROI region is a top-view point cloud image of the liquid outlet nozzle;
Step 2: performing principal component analysis on the first target point cloud to determine the longest axis and the shortest axis of the ROI region;
Step 3: projecting the ROI region into the feature vector space, so that the longest axis of the ROI region is aligned with the X axis and the shortest axis of the ROI region is aligned with the Z axis;
Step 4: mirror-flipping the first target point cloud about the XZ plane to generate a second target point cloud, registering the first target point cloud with the second target point cloud to generate a third target point cloud, and generating the corresponding flip matrix and rotation matrix;
Step 5: determining the symmetry axis of the third target point cloud on the X axis, calculating the symmetry axis of the first target point cloud from the symmetry axis of the third target point cloud together with the corresponding flip matrix and rotation matrix, and calculating the suction point or grabbing point from the symmetry axis of the first target point cloud;
Step 6: sucking or grabbing the liquid outlet nozzle through an end effector arranged on the robot module.
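Steps 2 to 5 can be sketched with numpy alone: principal component analysis to find the axes, a mirror flip about the XZ plane, and a toy point-to-point ICP with an SVD (Kabsch) rotation estimate. This is a minimal illustration of the idea, not the patent's implementation; a production system would use a KD-tree or a library registration routine instead of the brute-force nearest-neighbour search here:

```python
import numpy as np

def pca_axes(P):
    """Eigenvectors of the covariance of P, columns ordered longest axis first."""
    w, V = np.linalg.eigh(np.cov((P - P.mean(0)).T))
    return V[:, np.argsort(w)[::-1]]        # column 0 = longest axis, column 2 = shortest

def kabsch(A, B):
    """Best-fit rotation R such that B ~= A @ R.T (rows are matched points)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def mirror_and_register(P, iters=10):
    """Project P into its eigenvector space, mirror-flip about XZ, and register."""
    Q = (P - P.mean(0)) @ pca_axes(P)       # step 3: longest axis -> X, shortest -> Z
    M = Q * np.array([1.0, -1.0, 1.0])      # step 4: mirror flip about the XZ plane
    R = np.eye(3)
    for _ in range(iters):                  # steps 4-5: tiny point-to-point ICP
        moved = M @ R.T
        d2 = ((moved[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        R = kabsch(M, Q[d2.argmin(axis=1)]) # re-fit rotation to current matches
    return Q, M @ R.T, R                    # registered cloud is symmetric about X
```

For a cloud that is already mirror-symmetric, the mirrored copy is just a permutation of the original points, so the ICP rotation converges to the identity and the registered cloud coincides with the original point set.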
In an embodiment of the invention, the end effector is preferably an airbag clamp. The airbag clamp includes a clamp seat 1 and an airbag 2, wherein the clamp seat 1 is provided with a mounting position and the airbag 2 is arranged in this mounting position. The side wall of the airbag 2 forms a clamping space; in the inflated state, the outer side wall of the airbag 2 bears the radial supporting force applied by the clamp seat 1, and the inner side wall of the airbag 2 expands radially inward to clamp the object to be clamped.
A first avoidance notch is formed in the clamp seat 1, and a second avoidance notch is formed in the airbag 2 at the position corresponding to the first avoidance notch; the first avoidance notch and the second avoidance notch serve as a moving channel for the liquid outlet nozzle 3, so that the liquid outlet nozzle 3 can enter the clamping space.
Specifically, the top of the clamp seat 1 is closed, the top and side wall of the clamp seat 1 form an accommodating cavity, and the airbag 2 is arranged in the accommodating cavity. Likewise, the airbag 2 has a top, and the top and side walls of the airbag 2 form a clamping chamber. After the liquid outlet nozzle 3 of the pressing bottle enters the clamping chamber of the airbag 2, the airbag 2 is inflated so that it expands radially and wraps the pressing bottle: the outer side wall of the airbag 2 bears the radial supporting force applied by the clamp seat 1 in the inflated state, while the inner side wall of the airbag 2 expands radially inward and applies friction to grip the pressing bottle tightly, so that the pressing bottle can be lifted, translated, put down and so on. After the bottle is moved to the placing position, the gas in the airbag 2 is released so that the pressing bottle is detached from the clamp.
Fig. 10 is a schematic structural diagram of a robot module according to an embodiment of the present invention, and as shown in fig. 10, the robot module provided in the present invention further includes:
the first unit and the second unit, which are used for storing and/or transporting materials;
the depth camera 101, which is provided with a visual scanning area covering at least the first unit for storing or transporting the materials, and is used for visually scanning the materials, acquiring a depth image of the materials, and generating pose information and a storage position of the materials according to the depth image;
and the robot unit 103, which is in communication connection with the depth camera 101 and is used for receiving the pose information and the storage position, judging the placement state of the pressing bottle according to the pose information and the storage position, and picking the pressing bottle according to the placement state.
In an embodiment of the present invention, the first unit may be configured as a stocker unit 104;
the storage unit 104 is used for storing materials placed in disorder, the materials being pressing bottles;
and the robot unit 103 is in communication connection with the depth camera 101 and is used for receiving the pose information and the storage position, judging the placement state of the pressing bottle according to the pose information and the storage position, picking the pressing bottle according to the placement state, and transferring the pressing bottle to the second unit.
The second unit may be arranged to transport or store the sorted material, for example a support frame arranged to facilitate the orderly arrangement of the items.
The second unit may further include a transportation unit, so that the robot unit 103 can move the pressing bottles from the support frame to the transportation unit.
The depth camera 101 is arranged on a camera support.
When the processor module is configured to execute the steps of the object grasping posture recognition method by executing the executable instructions, it acquires a point cloud image of the pressing bottle, crops an ROI region on the point cloud image to determine a first target point cloud, determines the longest axis of the first target point cloud, determines the symmetry axis of the first target point cloud in the direction of the longest axis according to that longest axis, and determines the suction point or grabbing point of the pressing bottle according to the symmetry axis, thereby realizing the suction or grabbing of objects with a strip-shaped top liquid outlet nozzle, such as pressing bottles.
Fig. 11 is a flowchart of the steps of the object grasping posture recognition method in an embodiment of the present invention. As shown in fig. 11, the object grasping posture recognition method provided by the present invention includes the following steps:
Step S1: acquiring a point cloud image of the pressing bottle, and cropping an ROI (region of interest) region on the point cloud image to determine a first target point cloud;
Step S2: determining the longest axis of the first target point cloud;
Step S3: determining the symmetry axis of the first target point cloud in the direction of the longest axis according to the longest axis of the first target point cloud, and determining the suction point or grabbing point of the pressing bottle according to the symmetry axis.
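The patent does not spell out how the suction point is derived from the symmetry axis; one plausible rule (an assumption for illustration, not the patent's formula) is to take the midpoint of the cloud's extent along that axis:

```python
import numpy as np

def pick_point_on_axis(P, axis_point, axis_dir):
    """Hypothetical rule: midpoint of the cloud's extent along the symmetry axis."""
    d = np.asarray(axis_dir, dtype=float)
    d = d / np.linalg.norm(d)            # unit direction of the symmetry axis
    t = (P - axis_point) @ d             # scalar position of each point along the axis
    t_mid = 0.5 * (t.min() + t.max())    # midpoint of the extent
    return np.asarray(axis_point, dtype=float) + t_mid * d
```

The returned point lies on the symmetry axis, centred over the nozzle, which is a natural candidate for a vertical suction approach.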
The embodiment of the invention also provides an object grasping posture recognition device, which comprises a processor and a memory, the memory storing executable instructions of the processor, wherein the processor is configured to perform the steps of the object grasping posture recognition method by executing the executable instructions.
As described above, this embodiment realizes the suction or grabbing of objects with a strip-shaped top liquid outlet nozzle, such as pressing bottles, by acquiring a point cloud image of the pressing bottle, cropping an ROI region on the point cloud image to determine a first target point cloud, determining the longest axis of the first target point cloud, determining the symmetry axis of the first target point cloud in the direction of the longest axis according to that longest axis, and determining the suction point or grabbing point of the pressing bottle according to the symmetry axis.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit", "module" or "platform".
Fig. 12 is a schematic structural diagram of an object grasp gesture recognition apparatus in an embodiment of the present invention. An electronic device 600 according to such an embodiment of the invention is described below with reference to fig. 12. The electronic device 600 shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 12, the electronic device 600 is in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including the memory unit 620 and the processing unit 610), a display unit 640, and the like.
Wherein the storage unit stores program code which can be executed by the processing unit 610 such that the processing unit 610 performs the steps according to various exemplary embodiments of the present invention described in the above-mentioned object grasp gesture recognition method section of the present specification. For example, processing unit 610 may perform the steps as shown in fig. 11.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, camera, depth camera, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be understood that although not shown in FIG. 12, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
The embodiment of the invention also provides a computer readable storage medium for storing a program, and the program realizes the steps of the object grabbing gesture recognition method when executed. In some possible embodiments, the various aspects of the present invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the present invention described in the above-mentioned object grasp gesture recognition method section of this specification, when the program product is run on the terminal device.
As described above, when executing the program, the program on the computer-readable storage medium of this embodiment can select a single program module or a subset of program modules in the loaded flowchart to run, and control the operation of that subset without running the entire flowchart, which facilitates operation control of the whole program and makes it easier to debug errors when the program fails to execute.
Fig. 13 is a schematic structural diagram of a computer-readable storage medium in an embodiment of the present invention. Referring to fig. 13, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In situations involving remote computing devices, the remote computing devices may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to external computing devices (e.g., through the internet using an internet service provider).
In the embodiment of the invention, a point cloud image of the pressing bottle is acquired, an ROI region is cropped on the point cloud image to determine a first target point cloud, the longest axis of the first target point cloud is determined, the symmetry axis of the first target point cloud in the direction of the longest axis is determined according to that longest axis, and the suction point or grabbing point of the pressing bottle is determined according to the symmetry axis, so that the suction or grabbing of objects with a strip-shaped top liquid outlet nozzle, such as pressing bottles, is realized. Specifically, the first target point cloud is converted into a plurality of feature vectors in a feature vector space through principal component analysis to determine its longest axis; a second target point cloud is then generated by mirror-flipping the first target point cloud about the XZ plane, the two clouds are registered to generate a third target point cloud, and the symmetry axis is then calculated. This allows the symmetry axis, and hence the suction point or grabbing point, to be calculated accurately even when the point cloud density on the strip-shaped object is low. In an implementation of the invention, the point cloud detection time for each pressing bottle is 10 ms, and the position and direction of the point cloud can be detected well.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (9)

1. An object grasping system, comprising:
the depth camera comprises a projector and a light receiving sensor, the projector is used for vertically projecting structured light onto a vertically placed pressing bottle, and the light receiving sensor is used for receiving the structured light reflected by the pressing bottle to generate a point cloud image looking down on the pressing bottle;
the processor module is used for acquiring a point cloud image of the pressing bottle, cropping an ROI (region of interest) region on the point cloud image to determine a first target point cloud, determining the longest axis of the first target point cloud, determining the symmetry axis of the first target point cloud in the direction of the longest axis according to the longest axis of the first target point cloud, and determining a suction point or a grabbing point of the pressing bottle according to the symmetry axis;
the robot module is used for sucking or grabbing the pressed bottle according to the sucking or grabbing point of the pressed bottle, and comprises an end effector for grabbing the pressed bottle through the end effector;
the end effector is an air bag clamp, the air bag clamp comprising:
the clamp seat is provided with an installation position;
the air bag is arranged at the mounting position; the side wall of the air bag forms a clamping space, the outer side wall of the air bag is subjected to radial supporting force applied by the clamp seat in an inflated state, and the inner side wall of the air bag expands inwards in the radial direction to clamp an object to be clamped;
a first avoidance notch is formed in the clamp seat, and a second avoidance notch is formed in the position, corresponding to the first avoidance notch, of the air bag; the first avoidance notch and the second avoidance notch are used as a moving channel of a liquid outlet nozzle at the upper end of the pressing bottle so that the liquid outlet nozzle enters the clamping space.
2. The object grasping system according to claim 1, wherein the processor module includes:
the point cloud acquisition unit is used for acquiring a point cloud image of the pressing bottle and cropping an ROI (region of interest) region on the point cloud image to determine a first target point cloud;
a longest axis determining unit for determining a longest axis of the first target point cloud;
and the point location determining unit is used for determining a symmetry axis of the first target point cloud in the direction of the longest axis according to the longest axis of the first target point cloud, and further determining the suction point or the grabbing point of the pressing bottle according to the symmetry axis.
3. The object grasping system according to claim 2, wherein the longest axis determining unit includes:
a point cloud obtaining unit configured to obtain the first target point cloud;
a principal component analysis part for performing principal component analysis on the first target point cloud to convert the first target point cloud into a plurality of feature vectors in a feature vector space;
an axial determination unit, configured to determine a longest axis and a shortest axis of the first target point cloud according to the feature vector.
4. The object grasping system according to claim 2, wherein the point position determining unit includes:
a point cloud position adjustment unit configured to orient a longest axis of the first target point cloud toward an X axis of the feature vector space, and orient a shortest axis of the ROI region toward a Z axis of the feature vector space;
a symmetry axis determining part, configured to determine a symmetry axis of the first target point cloud in a direction of a longest axis according to the longest axis of the first target point cloud;
and the point location determining part is used for determining the suction point or the grabbing point of the pressing bottle according to the symmetry axis.
5. The object grasping system according to claim 4, wherein the symmetry-axis determining section includes:
a mirror flipping subsection, configured to mirror-flip the first target point cloud about the XZ plane to generate a second target point cloud, and to determine the flip matrix used in the flipping;
a point cloud registration subsection, configured to register the first target point cloud and the second target point cloud, generate a third target point cloud, and determine a rotation matrix of the first target point cloud during registration;
a normal vector determination part for determining a normal vector of a symmetry axis according to two corresponding points on the first target point cloud and the third target point cloud;
a target point determination subsection for determining a target point on a symmetry axis according to the first target point cloud, the flip matrix and the rotation matrix;
and a symmetry axis determination subsection for determining a position of the symmetry axis according to the normal vector of the symmetry axis and the target point.
6. The object grasping system according to claim 4, wherein the symmetry axis determining section includes:
a mirror flipping subsection, configured to mirror-flip the first target point cloud about the XZ plane to generate a second target point cloud;
a point cloud registration subsection, configured to register the first target point cloud and the second target point cloud, generate a third target point cloud symmetric along an X axis of the feature vector space, and determine a rotation matrix of the first target point cloud during registration;
and a symmetry axis determination unit for determining a symmetry axis of the third target point cloud and further determining a symmetry axis of the first target point cloud according to the rotation matrix.
7. The object grasping system according to claim 1, wherein the point cloud position adjustment section is configured to control a longest axis of the first target point cloud to coincide with an X-axis orientation of the feature vector space as much as possible, and a shortest axis of the ROI region to coincide with a Z-axis orientation of the feature vector space as much as possible.
8. The object grabbing system according to claim 1, wherein the projector is configured to vertically project structured light onto a vertically placed pressing bottle, and the light receiving sensor is configured to receive the structured light reflected by the pressing bottle to generate a point cloud image looking down on the pressing bottle;
the ROI region is the region of the liquid outlet nozzle on the top of the pressing bottle in the point cloud image.
9. The object grasping system according to claim 8, wherein upon determining the ROI region including the top liquid outlet nozzle on the pressing bottle, an RGB image looking down on the pressing bottle is acquired by an RGB camera, the RGB image being aligned with the point cloud image; the ROI region including the top liquid outlet nozzle on the pressing bottle is first recognized on the RGB image through a deep learning model, and the ROI region including the top liquid outlet nozzle on the pressing bottle is then determined on the point cloud image.
CN202211067628.4A 2022-09-02 2022-09-02 Object grasping system Active CN115139325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211067628.4A CN115139325B (en) 2022-09-02 2022-09-02 Object grasping system


Publications (2)

Publication Number Publication Date
CN115139325A true CN115139325A (en) 2022-10-04
CN115139325B CN115139325B (en) 2022-12-23

Family

ID=83416664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211067628.4A Active CN115139325B (en) 2022-09-02 2022-09-02 Object grasping system

Country Status (1)

Country Link
CN (1) CN115139325B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050128197A1 (en) * 2003-12-11 2005-06-16 Strider Labs, Inc. Probable reconstruction of surfaces in occluded regions by computed symmetry
CN108010036A (en) * 2017-11-21 2018-05-08 江南大学 A kind of object symmetry axis detection method based on RGB-D cameras
CN109015640A (en) * 2018-08-15 2018-12-18 深圳清华大学研究院 Grasping means, system, computer installation and readable storage medium storing program for executing
CN110580725A (en) * 2019-09-12 2019-12-17 浙江大学滨海产业技术研究院 Box sorting method and system based on RGB-D camera
CN215920503U (en) * 2021-10-15 2022-03-01 星猿哲科技(上海)有限公司 Air bag clamp

Also Published As

Publication number Publication date
CN115139325B (en) 2022-12-23

Similar Documents

Publication Publication Date Title
US11383380B2 (en) Object pickup strategies for a robotic device
US10124489B2 (en) Locating, separating, and picking boxes with a sensor-guided robot
CN113351522B (en) Article sorting method, device and system
US9659217B2 (en) Systems and methods for scale invariant 3D object detection leveraging processor architecture
JP2015147256A (en) Robot, robot system, control device, and control method
Pan et al. Manipulator package sorting and placing system based on computer vision
CN112802107A (en) Robot-based control method and device for clamp group
CN114037595A (en) Image data processing method, image data processing device, electronic equipment and storage medium
CN114092428A (en) Image data processing method, image data processing device, electronic equipment and storage medium
CN115139325B (en) Object grasping system
JP2019199335A (en) Information processing device, information processing program, and sorting system
CN115471834A (en) Object grabbing posture recognition method, device and equipment and storage medium
CN113284129B (en) 3D bounding box-based press box detection method and device
JPH08315152A (en) Image recognition device
CN117798957A (en) Object gripping system
JP7481926B2 (en) Information processing device, sorting system, and program
CN114022342A (en) Acquisition method and device for acquisition point information, electronic equipment and storage medium
CN114022341A (en) Acquisition method and device for acquisition point information, electronic equipment and storage medium
CN117809232A (en) Object grabbing gesture recognition method, device, equipment and storage medium
JP7481867B2 (en) Control device and program
JP2014174628A (en) Image recognition method
CN113298866B (en) Object classification method and device
US20230286165A1 (en) Systems and methods for robotic system with object handling
CN118115575A (en) Multi-contour object pose recognition system
US20230228688A1 (en) Planar object segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 200240 1, 5, 951 Jianchuan Road, Minhang District, Shanghai.

Patentee after: Star ape philosophy Technology (Shanghai) Co.,Ltd.

Patentee after: Xingyuanzhe Technology (Shenzhen) Co.,Ltd.

Address before: 518102 Room 1201M1, Hengfang Science and Technology Building, No. 4008, Xinhu Road, Yongfeng Community, Xixiang Street, Baoan District, Shenzhen City, Guangdong Province

Patentee before: Xingyuanzhe Technology (Shenzhen) Co.,Ltd.

Patentee before: Star ape philosophy Technology (Shanghai) Co.,Ltd.
