CN113977581A - Grabbing system and grabbing method - Google Patents

Grabbing system and grabbing method

Info

Publication number
CN113977581A
CN113977581A (application CN202111337906.9A)
Authority
CN
China
Prior art keywords
visual information
article
grabbing
articles
grasping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111337906.9A
Other languages
Chinese (zh)
Inventor
吴祺
胡晏莹
王鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shengdoushi Shanghai Science and Technology Development Co Ltd
Original Assignee
Shengdoushi Shanghai Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shengdoushi Shanghai Technology Development Co Ltd filed Critical Shengdoushi Shanghai Technology Development Co Ltd
Priority to CN202111337906.9A priority Critical patent/CN113977581A/en
Publication of CN113977581A publication Critical patent/CN113977581A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Manipulator (AREA)

Abstract

The present disclosure discloses a grasping system and a grasping method. The grasping method is used for grasping articles placed in an article-holding device and comprises the following steps: acquiring visual information of the articles to be grasped; processing the acquired visual information; determining a grasping strategy based on the processed visual information; and grasping an article based on the grasping strategy. Processing the acquired visual information includes: determining the graspable space, identifying the contours of all articles in the graspable space, segmenting the individual articles along those contours, performing three-dimensional modeling on the segmented articles, and confirming the relative spatial relationships between the articles. Because the method uses three-dimensional machine vision, article contours are accurately identified and modeled in three dimensions even when the articles are stacked in disorder, which overcomes the problems of weak two-dimensional image features and inaccurate identification of article types.

Description

Grabbing system and grabbing method
Technical Field
The present disclosure belongs to the technical field of robotics and particularly relates to a grasping system and a grasping method.
Background
With the development of robotics, robotic arms have come into widespread use in a variety of settings. Robots can replace manual labor in complex and highly repetitive work, with efficiency and accuracy far exceeding those of a human worker. Robots can also reach places that are unsuitable for people, such as hazardous sites or high-temperature environments where people cannot work for long. For example, robots are used extensively on industrial production lines to improve efficiency and yield, and they appear regularly in nuclear power plants, steel plants and similar facilities.
At present, robots are also gradually entering the catering industry. There, robotic arms are mainly used to grasp regularly shaped articles that have been placed according to a fixed rule, following a preset program.
However, grasping food with a robotic arm remains difficult in the catering industry. Taking fast food as an example, consider a product with an irregular shape such as a chicken leg. When a customer orders a chicken leg and the robotic arm reaches into a tray holding chicken legs, the position of each leg in the stack is irregular because every leg has a slightly different shape, and the positions change as other legs are grasped, so the arm cannot grasp a leg by following a fixed program. The same applies to other foods that are stacked in disorder, such as french fries and chicken wings. Moreover, because the chicken legs are stacked without order, an image-recognition-based grasping device may see the legs merge into a single continuous surface in the recognized image, making it impossible to determine which leg is the graspable object.
In addition, meat products look very similar to one another and have weak two-dimensional features, yet individual pieces still differ from each other, so food recognition that relies on two-dimensional images has low accuracy.
Disclosure of Invention
Based on this, it is necessary to provide a grasping system and a grasping method that solve at least one of the above problems of the related-art grasping approaches.
The present disclosure provides a grasping method for grasping articles placed in an article-holding device, comprising the following steps:
acquiring visual information of the articles to be grasped;
processing the acquired visual information;
determining a grasping strategy based on the processed visual information;
grasping an article based on the grasping strategy;
wherein processing the acquired visual information includes:
determining the graspable space, identifying the contours of all articles in the graspable space, segmenting the individual articles along those contours, performing three-dimensional modeling on the segmented articles, and confirming the relative spatial relationships between the articles.
According to one aspect of the disclosure, processing the acquired visual information further comprises: determining the boundary of the article-holding device, so that collisions with the edge and the bottom of the graspable space can be predicted, and grasping the edge, or striking the edge or the bottom during grasping, is avoided.
According to one aspect of the disclosure, the visual information acquisition is based on three-dimensional machine vision technology.
According to one aspect of the present disclosure, determining the grasping strategy further comprises: scoring the graspable state of every graspable article and identifying the article with the highest score;
comparing the highest score with a threshold; if the highest score exceeds the threshold, grasping the corresponding article; if it does not, concluding that no article can be grasped and grasping again after the placement state of the articles has been changed.
According to one aspect of the disclosure, processing the acquired visual information further comprises:
identifying the article to be grasped and confirming its type.
According to one aspect of the disclosure, processing the acquired visual information further comprises:
confirming the position coordinates and the corresponding posture of the article by combining its three-dimensional model with the recognized article type.
Furthermore, the present disclosure also proposes a grasping system for performing the grasping method of the present disclosure, the grasping system comprising:
a visual information acquisition device for acquiring visual information of the articles in the article-holding device;
a visual information processing device that receives the visual information from the visual information acquisition device and processes it;
a grasping device control device configured to receive the processed visual information from the visual information processing device and to issue a control strategy based on it;
and a grasping device configured to operate on the articles in the article-holding device according to the control strategy of the grasping device control device.
According to one aspect of the present disclosure, the grasping device includes a robotic arm and a gripping jaw.
According to an aspect of the present disclosure, the grasping device control device determines the jaw-lowering position and angle of the robotic arm and the grasping posture of the gripping jaw by combining at least one of the type, position, posture and center of gravity of the article to be grasped and the spatial relationships between different articles with boundary information of the graspable space and of the article-holding device.
According to one aspect of the present disclosure, the visual information acquisition device is at least one of a three-dimensional coordinate measuring machine, a three-dimensional laser scanner or a photographic three-dimensional scanner.
The grasping system and grasping method provided by the disclosure have the following advantages:
1. collision detection is performed on the basis of a three-dimensional model of the food; the position, posture, center of gravity and possible jaw-lowering space of each object are calculated, and the candidates are then scored to select the optimal grasping object; if no suitable grasping object exists, the articles are nudged to change their current state so that a suitable grasping object can be obtained;
2. three-dimensional machine vision is used, so article contours are accurately identified and modeled in three dimensions even when the articles are stacked in disorder;
3. three-dimensional visual recognition overcomes the problems of weak two-dimensional image features and inaccurate category recognition.
Drawings
Other advantages, features and details of the present disclosure will become apparent from the following description of preferred embodiments taken in conjunction with the accompanying drawings. The features and feature combinations mentioned above in the description, as well as those mentioned below in the description of the figures or shown in the figures alone, can be used not only in the combination indicated in each case but also in other combinations or on their own without departing from the scope of the disclosure. In the drawings:
fig. 1 schematically shows a top view of an article to be gripped;
FIG. 2 schematically illustrates a block diagram of a grasping system according to the present disclosure;
FIG. 3 schematically illustrates a flow chart of a grabbing method according to the present disclosure;
fig. 4 is a detailed flowchart based on the flowchart shown in fig. 3.
The figures are purely diagrammatic and not drawn to scale, and moreover they show only those parts which are necessary in order to clarify the disclosure, while other parts may be omitted or merely mentioned briefly. That is, the present disclosure may include other components in addition to those shown in the figures.
Detailed Description
Although the present disclosure is described fully with reference to the accompanying drawings, which contain preferred embodiments of the disclosure, it should be understood that a person of ordinary skill in the art can modify what is described here while still obtaining the technical effects of the disclosure. The description is therefore to be understood as a broad teaching directed at those skilled in the art and not as a limitation to the exemplary embodiments described herein.
Furthermore, in the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in schematic form in order to simplify the drawing.
Referring to fig. 1, an article to be grasped is shown schematically. In fig. 1 the article 200 to be grasped is a chicken leg, but this is merely an illustrative example. Besides chicken legs, the articles may be items such as chicken wings, which are broadly similar in shape but differ in their exact dimensions. A further characteristic of such articles is that, because a single article has no regular shape, a group of them placed together does not form a regular arrangement, and grasping one article may shift the positions of the others.
Fig. 1 also shows an article-holding device 100, which may, for example, be a dinner tray. The article-holding device 100 is shown with a rectangular cross-section, and its surrounding wall may have any suitable height, for example from 5 cm to 15 cm or even higher. The article-holding device 100 may be a container of uniform specification to facilitate standardized management; for example, it may be a basket of standard dimensions.
A plurality of chicken legs are placed in the article-holding device 100. They are placed at random, so the final arrangement is disordered. This does not exclude that some of the legs, for example the four at the far right of fig. 1, may happen to lie in a somewhat regular pattern. In general, however, the legs need not be arranged: at the same height their placement is disordered, and they are also stacked at different heights.
Fig. 2 schematically illustrates a block diagram of a grasping system according to one embodiment of the present disclosure. The grasping system of the present disclosure is a machine-vision-based grasping system. As shown in fig. 2, the grasping system includes a visual information acquisition device 300, a visual information processing device 400, a grasping device control device 500 and a grasping device 600.
The grasping device 600 may take a form known in the art, for example a multi-joint robot comprising a robotic arm 601 and a gripping jaw 602 at the end of the robotic arm 601. The robotic arm 601 may consist of several arm segments pivotally connected in sequence, giving a high degree of freedom of motion. The arm 21 at the end of the robotic arm 601 is provided with a gripping mechanism for gripping and releasing the article 200; this may, for example, take the form of a gripping jaw 602 comprising two, three or even more jaws that can open and close relative to one another.
As the final actuator, the grasping device 600 is arranged near the article-holding device 100 so that it can grasp and release the articles 200 in the article-holding device 100. The grasping device 600 may also be arranged near an automated catering line so that, after an article 200 is grasped from the article-holding device 100, it is placed, for example, into the dinner tray assigned to the corresponding customer.
The grasping device 600 may be controlled to place the grasped articles 200 selectively onto different trays on the food conveying line according to the chronological order of the orders, so that articles belonging to an order placed earlier are grasped and placed into the corresponding tray before articles belonging to orders placed later.
The visual information acquisition device 300 is used to collect visual information about the article-holding device 100 and the articles it contains. Because the surfaces of the articles 200 in the article-holding device 100 are irregular and the articles may be stacked, three-dimensional information collection is required. Specifically, the visual information acquisition device 300 may use a three-dimensional coordinate measuring machine, or at least one of a three-dimensional laser scanner, a photographic three-dimensional scanner and the like, to obtain information about the articles 200 and the article-holding device 100, such as the XYZ coordinates of their surfaces, color information and laser reflection intensity.
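The disclosure does not prescribe a data format for the acquired visual information. As a minimal sketch, assuming the chosen scanner driver returns per-point XYZ coordinates, color and laser reflection intensity as flat arrays, one capture could be bundled as follows (the `ScanFrame` type and the `sensor.read()` call are illustrative placeholders, not part of the disclosure):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ScanFrame:
    """One acquisition of the article-holding device and its contents."""
    xyz: np.ndarray        # (N, 3) surface coordinates in the sensor frame, meters
    rgb: np.ndarray        # (N, 3) per-point color, 0..255
    intensity: np.ndarray  # (N,) laser reflection intensity

def capture_frame(sensor) -> ScanFrame:
    """Wrap a hypothetical sensor driver call into a ScanFrame.

    `sensor.read()` stands in for whatever API the selected three-dimensional
    laser scanner or photographic three-dimensional scanner actually exposes.
    """
    xyz, rgb, intensity = sensor.read()
    return ScanFrame(np.asarray(xyz, dtype=float),
                     np.asarray(rgb, dtype=np.uint8),
                     np.asarray(intensity, dtype=float))
```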
The visual information processing device 400 receives the information from the visual information acquisition device 300 and analyzes it to complete the three-dimensional modeling of the article-holding device 100 and the articles 200 inside it. The three-dimensional model created includes the boundaries of the article-holding device 100, the shape of each of the articles 200 and its positional relationship to the other articles.
The boundaries of the article-holding device 100 are determined in order to define the graspable space and hence the range within which the grasping device may move. This prevents the grasping device from moving outside the article-holding device 100, colliding with the enclosure of the article-holding device 100 during movement, colliding with the bottom of the article-holding device 100, or mistakenly grasping the enclosure and tipping the article-holding device 100 over.
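One simple way to use the reconstructed container boundary is an axis-aligned bounding-box check with clearance margins. The sketch below is only an illustration under the assumption that the inner extents of the article-holding device have been expressed in the robot base frame; the disclosure itself does not prescribe a particular geometric representation, and the clearance values are invented:

```python
import numpy as np

def inside_graspable_space(point_xyz, box_min, box_max,
                           wall_clearance=0.01, bottom_clearance=0.005):
    """Return True if a gripper point stays clear of the container walls and bottom.

    box_min / box_max are the inner extents of the article-holding device
    (x, y, z) in meters; the clearances shrink the admissible volume so the
    jaws neither strike the enclosure nor scrape the bottom.
    """
    p = np.asarray(point_xyz, dtype=float)
    lo = np.asarray(box_min, dtype=float) + [wall_clearance, wall_clearance, bottom_clearance]
    hi = np.asarray(box_max, dtype=float) - [wall_clearance, wall_clearance, 0.0]
    return bool(np.all(p >= lo) and np.all(p <= hi))
```

A planned jaw-lowering position that fails this check would be treated as lying in a locked-out collision region.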
Further, the type of the article 200 may be identified on the basis of the three-dimensional model and a grayscale image. Of course, in other embodiments, the type of the article 200 may instead be preset as an input. When the application scenario is fixed, for example when only chicken legs or only chicken wings are to be grasped, the article type to be recognized can be entered in advance for each grasping system, and the type-identification step can then be omitted.
The grasping device control device 500 receives the information from the visual information processing device 400, analyzes it comprehensively, determines the optimal grasping object and then controls the grasping device 600 to move and grasp that article.
Specifically, based on the information received from the visual information processing device 400, the grasping device control device 500 analyzes the three-dimensional models of the articles 200 as follows: the posture, center of gravity and similar attributes of each article are determined in order to establish a feasible grasping path and jaw-lowering space; each graspable article 200 is then given a score, and the scores are compared to select the optimal grasping object. The optimal grasping object may be, for example, the object that can be grasped with the shortest movement path of the grasping device 600, or an object whose removal does not cause large-scale position changes of the other objects.
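The disclosure does not give a concrete scoring formula. As a hedged sketch, a score might reward a short approach path and penalize candidates whose removal is likely to disturb their neighbors; the weights and the candidate fields used below are invented purely for illustration:

```python
def score_candidate(path_length_m, n_touching_neighbors,
                    w_path=1.0, w_disturb=0.5):
    """Higher is better: a short approach path and few contacting neighbors."""
    return -w_path * path_length_m - w_disturb * n_touching_neighbors

def pick_best(candidates):
    """Select the best grasping candidate.

    candidates: list of dicts with keys 'id', 'path_length_m' and
    'n_touching_neighbors'. Returns (best_id, best_score); the caller still
    compares best_score against the grasping threshold described below.
    """
    scored = [(score_candidate(c["path_length_m"], c["n_touching_neighbors"]), c["id"])
              for c in candidates]
    best_score, best_id = max(scored)
    return best_id, best_score
```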
The grasping system of the present disclosure may be arranged along the path a customer follows to collect food, so that when the customer is detected at a specific position the corresponding grasping action for the article 200 is performed. Of course, the grasping system of the present disclosure may also be arranged next to the conveyor of a fully automatic catering line and used to grasp articles 200 from the article-holding device 100 according to the order requirements when the dishes on the conveyor are conveyed to predetermined positions.
The modules and functions of the grasping system of the present disclosure are described above with reference to figs. 1 and 2. Those skilled in the art will appreciate that the division into modules in fig. 2 is only one embodiment chosen for ease of description. In an actual implementation, modules whose functions can be combined may be provided as a single module rather than divided as in fig. 2. For example, the grasping device control device 500 and the visual information processing device 400 may be physically implemented as one controller that performs the functions of both.
The grasping method of the present disclosure is described below with reference to figs. 3 and 4. The grasping method may use the grasping system of the present disclosure, but it is not limited to it; any other suitable apparatus capable of carrying out the method may be used.
In general, referring to fig. 3, the grasping method of the present disclosure includes the following steps:
S1: acquiring visual information of the articles to be grasped and of the device in which they are placed;
S2: processing the acquired visual information;
S3: analyzing the processed visual information and determining a control strategy for the grasping device;
S4: controlling the grasping device based on the control strategy to grasp or nudge an article.
Specifically, referring to fig. 4, in step S2, processing the acquired information includes:
Edge detection: high-precision visual information is built up and processed in order to determine the entire graspable space;
Collision detection: while processing the acquired visual information, the boundary of the device holding the articles to be grasped is determined in combination with the parameters of the grasping device, so that it can be judged whether the robotic arm and the gripping jaw would collide with the edge or the bottom of the graspable space, and the regions where a collision is possible are locked out;
Contour segmentation and three-dimensional modeling: the contours of all articles in the graspable space are identified, the individual articles are separated along those contours and modeled in three dimensions, and the relative spatial relationships between the articles are confirmed.
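The disclosure does not name a specific segmentation algorithm. As one possible illustration, density-based clustering of the points lying above the container bottom can split the scene cloud into per-article point sets that are then modeled individually; scikit-learn's DBSCAN is used here purely as an example substitute for the unspecified contour-segmentation step, and the numeric parameters are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_articles(xyz, bottom_z, eps=0.01, min_points=50):
    """Split the scene point cloud into per-article point sets.

    xyz: (N, 3) points in meters. Points close to the container bottom are
    discarded first so that only article surfaces are clustered.
    Returns a list of (M_i, 3) arrays, one per detected article.
    """
    above = xyz[xyz[:, 2] > bottom_z + 0.003]            # drop the tray bottom
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(above)
    return [above[labels == k] for k in range(labels.max() + 1)]
```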
Preferably, step S2 may further include the following:
Article type recognition: the three-dimensionally modeled article is identified and its type is confirmed.
Further, step S2 may also include article position and posture recognition: the position coordinates of each article and its corresponding posture (for example whether it is lying flat, and its inclination angle) are confirmed by combining the three-dimensional model of the article with the recognized article type (if type recognition has been performed).
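The position and posture of a segmented article can be approximated directly from its point set. The sketch below is only one possible realization: it takes the centroid as the position and the dominant principal axis of the points as the lying direction, with the tilt measured against the horizontal plane:

```python
import numpy as np

def estimate_pose(points):
    """Return (centroid, main_axis, tilt_deg) for one article's point set.

    points: (M, 3) array. The main axis is the eigenvector of the covariance
    matrix with the largest eigenvalue; tilt_deg is its inclination relative
    to the horizontal plane (0 degrees means the article is lying flat).
    """
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    main_axis = eigvecs[:, np.argmax(eigvals)]
    tilt_deg = np.degrees(np.arcsin(abs(main_axis[2])))
    return centroid, main_axis, tilt_deg
```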
In step S3, analyzing the processed visual information and determining the grasping strategy includes:
Calculating the jaw-lowering position and posture: the position and angle at which the robotic arm lowers its jaw and the grasping posture of the gripping jaw are determined by combining at least one of the type, position, posture and center of gravity of the articles, together with the spatial relationships between different articles and the results of the edge detection and collision detection;
Scoring the articles to be grasped: the graspable state of every graspable article is scored and the article with the highest score is confirmed;
Decision: if the highest score exceeds a threshold, the robotic arm and gripping jaw are controlled to grasp the corresponding article; if it does not, it is concluded that no graspable article exists. In that case the robotic arm is controlled to nudge the articles to be grasped, changing their placement state, and the whole grasping procedure is then executed again. The threshold may be a value such that the grasping device is able to grasp an article whenever the score exceeds it; it may be set manually in advance, or it may be determined for the specific graspable states encountered in each grasping cycle.
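The threshold comparison and the nudge fall-back can be expressed compactly. In this sketch the scores and poses are assumed to come from the preceding steps, and the default threshold of 0.0 is purely illustrative, standing in for whatever value makes grasping reliable for the chosen scoring function:

```python
def decide(best_id, best_score, grasp_poses, nudge_pose, threshold=0.0):
    """Return the action for the grasping device.

    If the best candidate clears the threshold, the corresponding article is
    grasped; otherwise no article is considered graspable and a nudge is
    issued to change the placement state before the cycle is run again.
    """
    if best_score > threshold:
        return ("grasp", grasp_poses[best_id])
    return ("nudge", nudge_pose)
```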
Although the above embodiments take chicken legs as an example, those skilled in the art will understand that the articles to be grasped are not limited to chicken legs. They may be chicken wings and the like, irregular articles other than food, or articles such as dolls whose individual appearance is regular but which are easily stacked in disorder when several of them are placed together.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be carried out by hardware controlled by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the method embodiments described above. The storage medium may be any medium capable of storing program code, such as a ROM, a RAM, a magnetic disk or an optical disk.
It will be understood by those skilled in the art that the embodiments described above are exemplary and may be modified, and that the structures described in the various embodiments may be combined freely as long as there is no conflict of structure or principle.
Having described preferred embodiments of the present disclosure in detail, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the scope and spirit of the appended claims, and the disclosure is not limited to the exemplary embodiments set forth herein.

Claims (10)

1. A grasping method for grasping articles placed in an article-holding device, characterized by comprising the following steps:
acquiring visual information of the articles to be grasped;
processing the acquired visual information;
determining a grasping strategy based on the processed visual information;
grasping an article based on the grasping strategy;
wherein processing the acquired visual information includes:
determining the graspable space, identifying the contours of all articles in the graspable space, segmenting the individual articles along those contours, performing three-dimensional modeling on the segmented articles, and confirming the relative spatial relationships between the articles.
2. The grasping method according to claim 1,
wherein processing the acquired visual information further comprises: determining the boundary of the article-holding device, so that collisions with the edge and the bottom of the graspable space can be predicted, and grasping the edge, or striking the edge or the bottom during grasping, is avoided.
3. The grasping method according to claim 1,
wherein the visual information acquisition is based on three-dimensional machine vision technology.
4. The grasping method according to claim 1,
wherein determining the grasping strategy further comprises:
scoring the graspable state of every graspable article and identifying the article with the highest score;
comparing the highest score with a threshold; if the highest score exceeds the threshold, grasping the corresponding article; if it does not, concluding that no article can be grasped and grasping again after the placement state of the articles has been changed.
5. The grasping method according to any one of claims 1 to 4,
wherein processing the acquired visual information further comprises:
identifying the article to be grasped and confirming its type.
6. The grasping method according to claim 5,
wherein processing the acquired visual information further comprises:
confirming the position coordinates and the corresponding posture of the article by combining its three-dimensional model with the recognized article type.
7. A grasping system for performing the grasping method according to any one of claims 1-6, characterized in that the grasping system comprises:
a visual information acquisition device (300) for acquiring visual information of the articles (200) in the article-holding device (100);
a visual information processing device (400) that receives the visual information from the visual information acquisition device (300) and processes it;
a grasping device control device (500) configured to receive the processed visual information from the visual information processing device (400) and to issue a control strategy based on it; and
a grasping device (600) configured to operate on the articles (200) in the article-holding device (100) according to the control strategy of the grasping device control device (500).
8. The grasping system according to claim 7,
wherein the grasping device (600) comprises a robotic arm and a gripping jaw.
9. The grasping system according to claim 8,
wherein the grasping device control device (500) determines the jaw-lowering position and angle of the robotic arm and the grasping posture of the gripping jaw by combining at least one of the type, position, posture and center of gravity of the article to be grasped and the spatial relationships between different articles with boundary information of the graspable space and of the article-holding device.
10. The grasping system according to any one of claims 7 to 9,
wherein the visual information acquisition device (300) is at least one of a three-dimensional coordinate measuring machine, a three-dimensional laser scanner or a photographic three-dimensional scanner.
CN202111337906.9A 2021-11-10 2021-11-10 Grabbing system and grabbing method Pending CN113977581A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111337906.9A CN113977581A (en) 2021-11-10 2021-11-10 Grabbing system and grabbing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111337906.9A CN113977581A (en) 2021-11-10 2021-11-10 Grabbing system and grabbing method

Publications (1)

Publication Number Publication Date
CN113977581A true CN113977581A (en) 2022-01-28

Family

ID=79748133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111337906.9A Pending CN113977581A (en) 2021-11-10 2021-11-10 Grabbing system and grabbing method

Country Status (1)

Country Link
CN (1) CN113977581A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180039848A1 (en) * 2016-08-03 2018-02-08 X Development Llc Generating a model for an object encountered by a robot
WO2020185334A1 (en) * 2019-03-11 2020-09-17 RightHand Robotics, Inc. Item perturbation for picking operations
US20200316782A1 (en) * 2019-04-05 2020-10-08 Dexterity, Inc. Autonomous unknown object pick and place
CN111015655A (en) * 2019-12-18 2020-04-17 深圳市优必选科技股份有限公司 Mechanical arm grabbing method and device, computer readable storage medium and robot
CN112476434A (en) * 2020-11-24 2021-03-12 新拓三维技术(深圳)有限公司 Visual 3D pick-and-place method and system based on cooperative robot
CN113524194A (en) * 2021-04-28 2021-10-22 重庆理工大学 Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
CN113326932A (en) * 2021-05-08 2021-08-31 清华大学 Object operation instruction following learning method and device based on object detection
CN113420746A (en) * 2021-08-25 2021-09-21 中国科学院自动化研究所 Robot visual sorting method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张雪, et al.: "Three-dimensional modeling of strawberry leaves based on contour segmentation", 农业工程学报, vol. 33, no. 1, 15 February 2017 (2017-02-15), pages 206-210 *
樊亚春, et al.: "Three-dimensional modeling of scene images based on shape retrieval", 高技术通讯, vol. 33, no. 8, 15 August 2013 (2013-08-15), pages 781-788 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114454168A (en) * 2022-02-14 2022-05-10 赛那德数字技术(上海)有限公司 Dynamic vision mechanical arm grabbing method and system and electronic equipment
CN114454168B (en) * 2022-02-14 2024-03-22 赛那德数字技术(上海)有限公司 Dynamic vision mechanical arm grabbing method and system and electronic equipment

Similar Documents

Publication Publication Date Title
JP6805465B2 (en) Box positioning, separation, and picking using sensor-guided robots
US11192258B2 (en) Robotic kitchen assistant for frying including agitator assembly for shaking utensil
DE102019130048B4 (en) A robotic system with a sack loss management mechanism
US20210114826A1 (en) Vision-assisted robotized depalletizer
JP6461712B2 (en) Cargo handling device and operation method thereof
US10417521B2 (en) Material handling system and method
US11701777B2 (en) Adaptive grasp planning for bin picking
WO2017015898A1 (en) Control system for robotic unstacking equipment and method for controlling robotic unstacking
CN110420867A (en) A method of using the automatic sorting of plane monitoring-network
US10807808B1 (en) Systems and methods for automated item separation and presentation
CN113351522A (en) Article sorting method, device and system
US11911903B2 (en) Systems and methods for robotic picking and perturbation
CN113858188A (en) Industrial robot gripping method and apparatus, computer storage medium, and industrial robot
CN113977581A (en) Grabbing system and grabbing method
CN112802107A (en) Robot-based control method and device for clamp group
CN113538459A (en) Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection
Jørgensen et al. Designing a flexible grasp tool and associated grasping strategies for handling multiple meat products in an industrial setting
Nguyen et al. Development of a robotic system for automated decaking of 3D-printed parts
JP2024019690A (en) System and method for robot system for handling object
Watanabe et al. Cooking behavior with handling general cooking tools based on a system integration for a life-sized humanoid robot
JP2019501033A (en) Method and equipment for composing batches of parts from parts placed in different storage areas
Ray et al. Robotic untangling of herbs and salads with parallel grippers
US11485015B2 (en) System for eliminating interference of randomly stacked workpieces
US20210001488A1 (en) Silverware processing systems and methods
Kimura et al. Simultaneously determining target object and transport velocity for manipulator and moving vehicle in piece-picking operation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20220128

Assignee: Baisheng Consultation (Shanghai) Co.,Ltd.

Assignor: Shengdoushi (Shanghai) Technology Development Co.,Ltd.

Contract record no.: X2023310000138

Denomination of invention: Grab System and Grab Method

License type: Common License

Record date: 20230714
