CN114683251A - Robot grabbing method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN114683251A
Authority
CN
China
Prior art keywords
grabbing
pose
target
robot
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210335769.3A
Other languages
Chinese (zh)
Inventor
李明洋
许雄
邵威
杨帆
汪辉
戚祯祥
龚子轩
王家鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jaka Robotics Ltd
Original Assignee
Shanghai Jaka Robotics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jaka Robotics Ltd
Priority to CN202210335769.3A
Publication of CN114683251A
Legal status: Pending


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/0081: Programme-controlled manipulators with master teach-in means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Fuzzy Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The application provides a robot grabbing method and device, an electronic device and a readable storage medium, and relates to the technical field of robots. The method comprises the following steps: training a target object to be grabbed by the robot through a detection model and determining target data of the target object; determining, through a pose model and according to the target data, an initial grabbing pose set for the robot to grab the target object; and screening the initial grabbing pose set to obtain a target grabbing pose, so that the robot grabs the target object according to the target grabbing pose. The method and the device use a plurality of deep-learning neural network models to train the grabbed object and the grabbing poses separately, screen the plurality of determined poses, and teach the robot's grabbing with the selected grabbing pose. Grabbing of objects of many different types and positions can thus be taught, the application range of the teaching is enlarged, and the success rate of the robot when it later grabs and sorts according to the teaching is improved.

Description

Robot grabbing method and device, electronic equipment and readable storage medium
Technical Field
The application relates to the technical field of robots, in particular to a robot grabbing method and device, electronic equipment and a readable storage medium.
Background
With the continuous development of robot technology, industrial robots have become highly significant in industrial production, and automatic production lines based on industrial robots are widely used in machining, welding, spraying, assembling, carrying and other fields of the manufacturing industry. Industrial robots need to be taught before they actually move.
Currently, because the robot needs to follow a specific trajectory to complete grabbing and placing tasks, a lot of time and effort must be spent reprogramming and verifying teach points whenever the work changes. Existing teaching modes, such as programming by demonstration, allow a robot to be taught to grab a specific object, saving programming time and workload, but they are only suitable for objects with regular shapes. When objects with irregular or even arbitrary shapes must be grabbed in some scenes, existing programming-by-demonstration techniques struggle to estimate the grabbing pose and cannot guarantee grabbing reliability. As a result, existing teaching modes for robots cannot be applied to more application scenes, and the success rate of the robot's grabbing and sorting tasks is low.
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide a robot grabbing method, a robot grabbing device, an electronic device, and a readable storage medium, so as to solve the problem in the prior art that the success rate of robot grabbing tasks is low.
In order to solve the above problem, in a first aspect, the present application provides a robot grasping method, including:
training a target object grabbed by a robot through a detection model, and determining target data of the target object, wherein the target data comprises position information and object information of the target object;
determining an initial grabbing pose set when the robot grabs the target object according to the target data through a pose model;
and screening the initial grabbing pose set to obtain a target grabbing pose, so that the robot can grab the target object according to the target grabbing pose.
In the implementation process, when the robot's grabbing is taught, the task is handled by two deep-learning neural network models. Through the detection model, the positions and types of the grabbed target objects can be trained to obtain the corresponding target data. Through the pose model, an initial grabbing pose set for grabbing the target object can be determined according to the target data; screening the initial grabbing pose set removes redundant poses and yields a target grabbing pose with a high grabbing success rate for the robot to use, completing the grabbing teaching. Grabbing of objects of many different types and positions can thus be taught, which enlarges the application range of the teaching, suits more application scenes, and improves the success rate when the robot grabs and sorts according to the teaching.
Optionally, the determining, by the pose model and according to the target data, an initial grabbing pose set when the robot grabs the target object includes:
acquiring a local image of the target object captured by a first camera and a depth image registered with the local image, wherein the first camera is a camera corresponding to a grabbing end of the robot that takes local shots;
and processing the local image and the depth image according to the target data through the pose model to obtain a plurality of initial grabbing poses at which the grabbing end grabs the target object, and taking the plurality of initial grabbing poses as the initial grabbing pose set.
In the implementation process, when the plurality of initial grabbing poses are determined, a local image taken during grabbing and a depth image registered with it can be acquired from a first camera bound to the coordinate system of the robot's grabbing end. The pose model processes the two images together with the target data of the target object, so that the actual distance between the sensor and the target object is learned, and initial grabbing poses for grabbing the target object in the scene are obtained and collected into the corresponding initial grabbing pose set. The grabbed object may be rigid or flexible; the grabbing pose of any object can be trained and estimated in any scene, so that objects of different types and positions can be grabbed.
Optionally, the screening the initial grabbing pose set to obtain a target grabbing pose includes:
performing primary screening on the initial grabbing pose set to obtain a first grabbing pose set meeting grabbing requirements;
and performing secondary screening on the first grabbing pose set to obtain the target grabbing pose with the grabbing success rate reaching a success threshold value.
In the implementation process, the feasibility and reachability of the plurality of initial grabbing poses obtained by training in the pose model differ, so their success rates also differ; by performing multi-stage screening on the initial grabbing pose set, grabbing poses with high success rates can be obtained as the final target grabbing pose. Redundant grabbing poses in the set can be deleted: the first-level screening determines a feasible first grabbing pose set that meets the grabbing requirements, and the second-level screening obtains the target grabbing pose whose grabbing success rate reaches the success threshold. Coarse and fine screening of the grabbing pose set are thus realized through multi-stage screening, which effectively improves screening precision and accuracy and improves the success rate when the robot grabs with the target grabbing pose.
Optionally, the performing a first-level screening on the initial grabbing pose set to obtain a first grabbing pose set meeting the grabbing requirement includes:
acquiring vector data corresponding to each initial grabbing pose in the initial grabbing pose set;
determining the grabbing requirement corresponding to the target object based on the position information and the object information;
and screening each vector data based on the grabbing requirements, taking the initial grabbing pose of which the vector data meets the grabbing requirements as a first grabbing pose, and taking a plurality of first grabbing poses as a first grabbing pose set.
In the implementation process, when the initial grabbing pose set is coarsely screened at the first level, each initial grabbing pose can be decomposed into corresponding vector data, and the grabbing requirement of the target object is determined according to its position information and object information. By screening the vector data against the obtained grabbing requirement, a first grabbing pose set that can actually grab is determined, and the grabbing poses that cannot grab are deleted.
Optionally, the performing second-level screening on the first grabbing pose set to obtain the target grabbing pose with the grabbing success rate reaching a success threshold includes:
calculating motion trail data of each first grabbing pose;
according to the motion trail data of each first grabbing pose, determining a plurality of second grabbing poses from the first grabbing pose set;
determining the grabbing success rate of each second grabbing pose according to a historical pose database in the pose model;
and determining the target grabbing pose with the grabbing success rate reaching the success threshold.
In the implementation process, when the first grabbing pose set is finely screened at the second level, the arm length of the robot and the obstacles in the scene differ from case to case. To ensure that the robot can actually reach a grabbing pose, the motion trajectory data of the mechanical arm when grabbing at each first grabbing pose can be calculated with a kinematic algorithm, and whether the end of the arm can reach the pose is judged from that trajectory data, yielding a plurality of reachable second grabbing poses. The grabbing success rate of each second grabbing pose is then determined from the historical pose database, so that the target grabbing pose whose success rate meets the success threshold is obtained by screening, effectively ensuring the feasibility, reachability and success rate of the target grabbing pose.
Optionally, the training a target object grabbed by the robot through the detection model to determine target data of the target object includes:
training according to the detection model parameters and the target object through the detection model to obtain a target detection model;
acquiring a global image of a captured scene based on a second camera, wherein the second camera is a camera for shooting the captured scene globally;
and determining the position information and the object information corresponding to the target object according to the global image through the target detection model, and taking the position information and the object information as the target data.
In the implementation process, the target object is learned by a deep-neural-network detection model combined with pre-trained detection model parameters, so a target detection model with high accuracy can be trained. A global image of the whole grabbing scene is obtained from the second camera and input into the target detection model, which infers the position information and object information of the target object in the image as the target data. Grabbing teaching can therefore be performed for arbitrary objects of different types and positions, which improves the universality of the grabbing targets, enlarges the application range of the teaching, and improves the accuracy and pertinence of the target data.
Optionally, after the target object grabbed by the robot is trained through the detection model and the target data of the target object is determined, the method further includes:
determining a grabbing position of the robot when the robot grabs the target object according to the position information;
and driving the robot to move to the grabbing position.
In the implementation process, after the target data of the target object is obtained through training, the robot is driven to move to the grabbing position when the target object is grabbed according to the obtained position information, and the robot is convenient to grab the target object in the grabbing position.
Optionally, the method further comprises:
tracking the motion path of the target object through the detection model to obtain a target track;
and driving the robot to place the grabbed target object based on the target track.
In the implementation process, when the grabbing of the robot is taught, the placing of the grabbed target object can be also taught. Through the detection model, the motion path of the object in the migration process can be tracked and trained, and therefore the grabbed target object can be placed according to the placed target track. The whole motion process of the target object can be tracked, and the captured placing process is supplemented, so that the capturing process of the object is completely taught.
In a second aspect, the present application further provides a robotic grasping device, the device comprising:
the object training module is used for training a target object grabbed by the robot through a detection model and determining target data of the target object, wherein the target data comprise position information and object information of the target object;
the pose determining module is used for determining an initial grabbing pose set when the robot grabs the target object according to the target data through a pose model;
and the pose screening module is used for screening the initial grabbing pose set and acquiring a target grabbing pose so that the robot can grab the target object according to the target grabbing pose.
In the implementation process, target data of the captured target object is obtained through training of the object training module based on the detection model; training by a pose determination module based on a pose model to obtain an initial capture pose set; and screening and deleting the redundant poses in the initial grabbing pose set through a pose screening module, and determining the target grabbing pose with higher grabbing success rate so as to be grabbed by the robot and finish grabbing teaching. The robot can be based on two deep learning neural network models to teach the grabbing conditions of objects of various different types and positions, the application range of teaching is enlarged, the robot is suitable for more application scenes, and the success rate of the robot in grabbing and sorting according to the teaching is improved.
In a third aspect, the present application further provides an electronic device, where the electronic device includes a memory and a processor, where the memory stores program instructions, and when the processor reads and runs the program instructions, the processor executes steps in any implementation manner of the robot capture method.
In a fourth aspect, the present application further provides a computer-readable storage medium, where computer program instructions are stored, and when the computer program instructions are read and executed by a processor, the steps in any implementation manner of the robot grabbing method are executed.
In summary, the application provides a robot grabbing method and device, an electronic device and a readable storage medium, in which a plurality of deep-learning neural network models are used to train the grabbed objects and the grabbing poses respectively, the plurality of determined poses are screened, and the robot's grabbing is taught by determining a target grabbing pose with a high success rate. Grabbing of objects of many different types and positions can be taught, which improves the universality of the teaching, enlarges its application range, and improves the success rate when the robot grabs and sorts according to the teaching.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be regarded as limiting the scope; those of ordinary skill in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a robot grabbing method according to an embodiment of the present disclosure;
fig. 3 is a detailed flowchart of a step S300 according to an embodiment of the present disclosure;
fig. 4 is a detailed flowchart of a step S400 provided in an embodiment of the present application;
fig. 5 is a detailed flowchart of step S410 according to an embodiment of the present disclosure;
fig. 6 is a detailed flowchart of step S420 according to an embodiment of the present disclosure;
fig. 7 is a detailed flowchart of a step S200 according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a robot gripping device according to an embodiment of the present application.
Reference numerals: 100 - electronic device; 111 - memory; 112 - memory controller; 113 - processor; 114 - peripheral interface; 115 - input-output unit; 116 - display unit; 500 - robot grabbing device; 510 - object training module; 520 - pose determination module; 530 - pose screening module.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without any creative effort belong to the protection scope of the embodiments of the present application.
In existing teaching modes for robot grabbing tasks, the robot needs to follow a specific trajectory to complete the grabbing and placing tasks, so when the work changes a lot of time and energy is needed to reprogram and re-verify the teach points. In addition, commonly adopted teaching methods, such as jog teaching or teach-pendant teaching, first require moving the robot end to a designated position and then adjusting the posture of the end tool, so the whole adjustment process is time-consuming and labor-intensive.
Industrial scenes contain many irregular objects, even objects of arbitrary shape. When such specific objects are grabbed, for example devices with complex workpiece surface structures, engineers with strong technical experience are needed to demonstrate the programmed trajectory, repeatedly changing the robot's position and posture to reach the required end pose; even then, the reliability and success rate of the end grabbing pose cannot be guaranteed, which greatly reduces the working efficiency and usability of robot grabbing. Current methods for teaching robots therefore cannot be applied to more application scenes, and the success rate of the robot's grabbing and sorting tasks is low.
In order to solve the above problem, an embodiment of the present application provides a robot grasping method, which is applied to an electronic device, where the electronic device may be an electronic device with a logic calculation function, such as a server, a Personal Computer (PC), a tablet PC, a smart phone, and a Personal Digital Assistant (PDA), and can simulate and calculate a grasping pose in a robot grasping task, so as to improve success rate and stability of robot grasping.
Optionally, referring to fig. 1, fig. 1 is a block schematic diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 100 may include a memory 111, a memory controller 112, a processor 113, a peripheral interface 114, an input-output unit 115, and a display unit 116. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely exemplary and is not intended to limit the structure of the electronic device 100. For example, electronic device 100 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The above-mentioned elements of the memory 111, the memory controller 112, the processor 113, the peripheral interface 114, the input/output unit 115 and the display unit 116 are electrically connected to each other directly or indirectly, so as to implement data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The processor 113 is used to execute the executable modules stored in the memory.
The Memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 111 is configured to store a program, and the processor 113 executes the program after receiving an execution instruction; the method executed by the electronic device 100 defined by the process disclosed in any embodiment of the present application may be applied to, or implemented by, the processor 113.
The processor 113 may be an integrated circuit chip having signal processing capability. The Processor 113 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the Integrated Circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The peripheral interface 114 couples various input/output devices to the processor 113 and the memory 111. In some embodiments, the peripheral interface 114, the processor 113, and the memory controller 112 may be implemented in a single chip. In other embodiments, they may each be implemented in a separate chip.
The input/output unit 115 is used for a user to provide input data. The input/output unit 115 may be, but is not limited to, a mouse, a keyboard, and the like.
The display unit 116 provides an interactive interface (e.g., a user operation interface) between the electronic device 100 and the user or is used for displaying image data to the user for reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. In the case of a touch display, the display can be a capacitive touch screen or a resistive touch screen, which supports single-point and multi-point touch operations. The support of single-point and multi-point touch operations means that the touch display can sense touch operations simultaneously generated from one or more positions on the touch display, and the sensed touch operations are sent to the processor for calculation and processing. In the embodiment of the present application, the display unit 116 may display the capture scene, the captured images of the first camera and the second camera, a plurality of capture poses in the initial capture pose set, and the like.
The electronic device in this embodiment may be used to perform each step in each robot grasping method provided in this embodiment. The following describes in detail the implementation of the robot gripping method by means of several embodiments.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a robot grabbing method according to an embodiment of the present disclosure, where the method may include steps S200-S400.
Step S200, training a target object grabbed by the robot through a detection model, and determining target data of the target object.
The target object can be an object of any position and any type in the grabbing scene, so the target object is trained on the basis of the detection model during grabbing, and the obtained target data include the position information and the object information of the target object. The detection model can be any of various deep-learning neural network models, for example YOLO-series target-detection models such as YOLOv3, YOLOv4 and YOLOv5. The position, type and transfer process of the object can be trained, and target detection and motion tracking of the target object can be performed in real time.
Alternatively, the position information of the target object may include position coordinates of the target object in the overall coordinates of the captured scene, or position information of the object frame, and the like, and the object information may include information such as a type, a tag, a structure, and the like of the target object, for example, information of which device the target object belongs to, and a corresponding structural feature thereof.
For example, the target object may be an object in a variety of application scenarios, such as a grab scenario in an industrial scenario, a sort scenario in a warehouse scenario, and so on. The target object can be various different articles or devices in various scenes, for example, in an industrial scene, the target object can be various industrial devices, such as devices with complex workpiece surface structures, such as screws and nuts of various types, simple devices with regular public surface structures, such as block-shaped devices, and the like; the target object may also be a physical parcel of a variety of different shapes, different sizes, etc. in a warehousing scenario. Therefore, the robot can realize grabbing teaching of objects in different scenes and different types, the universality of the teaching is effectively improved, and the robot can flexibly deal with the change of the environment and the task.
And step S300, determining an initial grabbing pose set when the robot grabs the target object according to the target data through a pose model.
The pose model may be any of various deep-learning neural network models trained for grasping, for example a model obtained by training with GraspNet (an efficient convolutional neural network for real-time grasp detection on low-powered devices), which has a large-scale, high-quality training data set, or a model trained for grasping on the basis of a CNN (Convolutional Neural Network). A plurality of poses of the robot when grabbing the target object are calculated and trained through the pose model according to the target data, and the initial grabbing pose set for grabbing the target object can be determined.
Optionally, GraspNet-1Billion provides a benchmark data set covering 190 cluttered, complex scenes and a convolutional neural network model with strong generalization capability. Combining GraspNet-1Billion with AnyGrasp (a solution aiming at human-level grasping in cluttered scenes) makes it possible to grasp any object in any scene, including both rigid and flexible objects, meeting the requirements of existing flexible manufacturing. Optionally, in order to improve grasping reliability, a suction cup may be provided at the grasping end of the robot, and a suction-cup grasping reference may be expressed during modeling. Because the GraspNet baseline network (GraspNet-base) defines the grasping pose of a two-finger gripper with nine parameters, namely a three-dimensional approach vector, a rotation angle about the principal axis, three-dimensional approach point coordinates, an approach depth and the opening width of the gripper, a deep-learning neural network model based on GraspNet-base can be trained on the GraspNet-1Billion benchmark data set and used as the pose model. GraspNet-base is an end-to-end network structure model, and its training structure can include: a point encoder-decoder that extracts cloud features from the input scene point cloud of N point coordinates and samples M points with C-dimensional features; an ApproachNet that predicts approach vectors, which are used to group points inside cylindrical regions; an OperationNet for predicting operation parameters; and a ToleranceNet for predicting grasp robustness.
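For illustration, a minimal Python sketch of the nine-parameter two-finger grasp representation described above is given below; the class and field names are assumptions made for this example, not GraspNet's actual interface.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TwoFingerGrasp:
    """Illustrative nine-parameter two-finger grasp pose (assumed names).

    3 (approach vector) + 1 (rotation angle) + 3 (approach point)
    + 1 (approach depth) + 1 (gripper width) = 9 parameters.
    """
    approach: np.ndarray   # (3,) unit vector pointing from the gripper toward the object
    angle: float           # rotation about the approach axis, radians
    point: np.ndarray      # (3,) approach point near the object surface
    depth: float           # advance distance along the approach vector, metres
    width: float           # gripper opening width, metres
    score: float = 0.0     # optional network confidence, used by later screening

    def grasp_center(self) -> np.ndarray:
        """3D position of the gripper centre after advancing by `depth`."""
        a = self.approach / np.linalg.norm(self.approach)
        return self.point + self.depth * a
```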
And S400, screening the initial grabbing pose set to obtain a target grabbing pose, so that the robot grabs the target object according to the target grabbing pose.
The initial grabbing pose set comprises a plurality of redundant grabbing poses with low feasibility or reliability, the grabbing success rate of the redundant grabbing poses is low, and therefore the initial grabbing pose set needs to be screened, the target grabbing poses with high success rate are obtained from the initial grabbing pose set and serve as teaching results, a robot can grab a target object with the target grabbing poses, and the success rate of grabbing by the robot is improved.
Optionally, after the robot captures the target object, the robot may also track the motion path of the target object through the detection model to obtain a target track; and the driving robot places the grabbed target object based on the target track. The detection model can also learn and train the transfer process of placing the target object so as to track the motion path of the target object when the target object is placed, and then the captured target object is placed according to the placed target track. The whole motion process of the target object can be tracked, and the captured placing process is supplemented, so that the capturing process of the object is completely taught.
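As a rough sketch of this tracking step, the snippet below simply records the detected box centre of the target in successive frames to form a target track; the `detect` callback is a placeholder standing in for the trained detection model.

```python
def track_target_path(frames, detect):
    """Collect the target's box centre in each frame as a coarse motion path.

    `detect(frame)` is assumed to return an (x_min, y_min, x_max, y_max)
    box for the target object, e.g. from the trained detection model.
    """
    path = []
    for frame in frames:
        x0, y0, x1, y1 = detect(frame)
        path.append(((x0 + x1) / 2.0, (y0 + y1) / 2.0))
    return path
```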
It is worth explaining that in the robot grabbing method in the embodiment of the application, when the grabbing task of the robot is taught, the deep learning neural network model is used for training, so that the teaching can be rapidly performed, the code programming amount during teaching and the workload during field programming can be effectively reduced, and the labor cost and the time cost are reduced.
In the embodiment shown in fig. 2, the grabbing of objects of various types and positions in different scenes can be taught, the application range of the teaching is expanded, the teaching device is suitable for more application scenes, and the success rate of the robot in grabbing and sorting according to the teaching is improved.
Optionally, referring to fig. 3, fig. 3 is a detailed flowchart of step S300 according to an embodiment of the present disclosure, and step S300 may further include steps S310 to S320.
Step S310, a local image of the target object captured by a first camera and a depth image registered with the local image are acquired.
The first camera is a camera corresponding to the grabbing end of the robot that takes local shots. The local image captured by the first camera may be acquired through a wireless or wired connection to the first camera, for example through Bluetooth, a wired network or a wireless network. The acquired local image is an RGB colour image, in which various colours are obtained by varying and superimposing the red (R), green (G) and blue (B) channels. The depth image is a depth map, i.e. an image or image channel containing, for each pixel, the distance from the viewpoint to the surface of the scene object. The depth image and the RGB local image have been registered, and each pixel value in the depth image represents the actual distance between the sensor and the object, so the pixels of the local image and the depth image are in one-to-one correspondence.
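Because the local image and the depth image are registered pixel-for-pixel, each depth value can be back-projected to a 3D point in the camera frame with the standard pinhole model. The Python sketch below illustrates this; the intrinsic parameters fx, fy, cx, cy are placeholders that would come from the camera calibration.

```python
import numpy as np

def depth_to_point_cloud(depth_m: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a registered depth image (metres) into an (N, 3) point cloud."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    valid = depth_m > 0                                # zero depth = no measurement
    z = depth_m[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Because the RGB image is registered with the depth image, the colour at
# pixel (u, v) belongs to the same surface point as depth_m[v, u].
```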
Optionally, a typical industrial robot is a multi-joint, multi-degree-of-freedom mechanical arm driven by several rotary motors to achieve controllable positioning of the robot end. When grabbing, a camera is used to obtain the position coordinates of objects in the grabbing scene so that the robot can operate on the target according to the camera image, that is, grab according to robot vision. In order to establish a relationship between the coordinate systems of the camera (the robot's eye) and the robot end (the robot's hand), the robot and camera coordinate systems must be calibrated; this calibration process is hand-eye calibration. Hand-eye calibration has two configurations, eye-to-hand and eye-in-hand: eye-to-hand means the camera is placed at a fixed position, so its position relative to the robot coordinate system does not change; eye-in-hand means the camera is bound to the robot hand, so the camera coordinate system moves with the robot end.
It is worth mentioning that in the robot grabbing method of the embodiment of the present application, in order to photograph the target object during grabbing on the basis of machine vision, a first camera bound to the grabbing end is mounted at the grabbing end of the robot and corresponds to the coordinate system of the grabbing end, so that the first camera photographs the target object while the grabbing end grabs, providing a first-person view and more image detail of the specific area. Since the coordinate axes of the tool coordinate system at the robot end and of the camera coordinate system are parallel, the calibration of the first camera can be simplified to a translation in three directions; the first camera thus provides a good first-person observation angle and a hardware basis for estimating the pose of an arbitrary object. For example, the first camera may be bound to the grabbing end of the robot through various connectors, which may include connecting assemblies of different structures such as screws and nuts, buckles and adapter plates.
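The simplification noted above (parallel axes between the end tool frame and the first camera's frame) can be sketched as follows; the translation offset is a made-up example value, not a calibrated one.

```python
import numpy as np

# With parallel axes, the hand-eye transform reduces to a fixed translation
# from the camera frame to the tool frame (placeholder value, metres).
CAM_TO_TOOL_OFFSET = np.array([0.00, -0.065, 0.035])

def camera_point_to_tool(p_cam: np.ndarray) -> np.ndarray:
    """Express a point measured by the wrist-mounted first camera in the tool frame."""
    return p_cam + CAM_TO_TOOL_OFFSET

def tool_point_to_base(p_tool: np.ndarray, T_base_tool: np.ndarray) -> np.ndarray:
    """T_base_tool is the 4x4 tool pose reported by the robot controller."""
    return (T_base_tool @ np.append(p_tool, 1.0))[:3]
```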
Step S320, processing the local image and the depth image according to the target data through the pose model to obtain a plurality of initial grabbing poses at which the grabbing end grabs the target object, and taking the plurality of initial grabbing poses as the initial grabbing pose set.
The local image and the depth image are input into the pose model for training. According to the position information and the object information in the target data, the pose model processes the registered pixels, which represent the actual distance between the sensor and the object, so that the outer contour of the target object is learned and trained, a plurality of initial grabbing poses for grabbing the target object are obtained, and they are collected into the initial grabbing pose set.
In the embodiment shown in fig. 3, the grabbing pose of any object can be trained and simulated in any scene, and a corresponding initial grabbing pose set is obtained.
Optionally, referring to fig. 4, fig. 4 is a detailed flowchart of step S400 provided in the present embodiment, and step S400 may further include steps S410 to S420.
And S410, performing primary screening on the initial grabbing pose set to obtain a first grabbing pose set meeting grabbing requirements.
The initial grabbing pose set contains many redundant grabbing poses whose feasibility and reachability during grabbing are low, that is, poses that cannot grab or cannot be reached for grabbing. Multi-stage screening is therefore performed on the initial grabbing pose set, with coarse screening and fine screening of the grabbing poses carried out respectively. The first-level screening is the coarse screening: grabbing poses that do not meet the grabbing requirements are removed, and a first grabbing pose set that meets the requirements and has high grabbing feasibility is retained.
And step S420, performing secondary screening on the first grabbing pose set to obtain the target grabbing pose with the grabbing success rate reaching a success threshold value.
The second-level screening is the fine screening. The poses in the first grabbing pose set are all highly feasible, and from these graspable poses the target grabbing pose whose grabbing success rate reaches the success threshold can be further selected; the success threshold can be a success-rate value set by an operator according to the actual conditions and requirements of grabbing. Taking the grabbing pose with a higher success rate obtained by screening the first grabbing pose set as the target grabbing pose improves the success rate when the robot grabs with the target grabbing pose.
In the embodiment shown in fig. 4, the rough screening and the fine screening of the initial grabbing pose set are realized by performing multi-stage screening on the initial grabbing pose set, so that the screening precision and accuracy are effectively improved, and the success rate of the robot grabbing in the target grabbing pose is improved.
Optionally, referring to fig. 5, fig. 5 is a detailed flowchart illustrating step S410 according to an embodiment of the present disclosure, and step S410 may further include steps S411 to S413.
Step S411, obtaining vector data corresponding to each initial grabbing pose in the initial grabbing pose set.
In the first-stage screening, each initial grabbing pose can be decomposed into corresponding vector data, and the vector data comprises a plurality of vectors in different directions, such as an approach vector in the horizontal direction and an approach vector in the vertical direction.
Step S412, determining the grabbing requirement corresponding to the target object based on the position information and the object information.
The grabbing requirement for the target object is determined according to the position information and the object information in the acquired target data. For example, when the target object is placed on a desktop or the ground, its grabbing requirement may be an angle requirement that the approach be close to perpendicular to the desktop or ground; when the target object is placed against a wall, its grabbing requirement may be to grab from angles other than through the wall; when the target object is placed in a corner, its grabbing requirement may be an angle requirement that the approach be close to perpendicular to the corner walls, and so on.
Step S413, screening each vector data based on the grabbing requirement, taking the initial grabbing pose of which the vector data meets the grabbing requirement as a first grabbing pose, and taking a plurality of first grabbing poses as the first grabbing pose set.
According to the grabbing requirement of the target object, the vector data corresponding to each initial grabbing pose can be screened and eliminated by means of a cone region. For example, when the target object is placed on a desktop or the ground, the grabbing requirement is that the approach direction be close to perpendicular to the desktop or ground. Let the near-vertical approach vector in the vector data be a, with an included angle c to the vertical; let the near-horizontal approach vector be b, with an included angle d to the vertical; and let the set cone angle be e. During screening, an approach vector is kept only if its angle to the vertical is smaller than e/2: since c < e/2, the approach vector a is retained, and since d is not smaller than e/2, the approach vector b is eliminated. Initial grabbing poses with low feasibility, such as grabbing from below the desktop or grabbing horizontally along the desktop, can thus be deleted.
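A minimal sketch of this first-level cone screening is given below, assuming each initial grabbing pose carries an approach vector as in the earlier parameterization; the cone angle e and the world "up" direction are illustrative choices.

```python
import numpy as np

def first_level_filter(grasps, up=np.array([0.0, 0.0, 1.0]), cone_angle_deg=60.0):
    """Keep grasps whose approach direction lies inside a cone around the surface normal.

    The approach vector is assumed to point from the gripper toward the object,
    so for a top-down grasp on a table -approach is compared with `up`; a grasp
    is kept when that angle is below e/2 (half of the cone angle).
    """
    half_angle = np.deg2rad(cone_angle_deg) / 2.0
    kept = []
    for g in grasps:
        a = g.approach / np.linalg.norm(g.approach)
        angle = np.arccos(np.clip(np.dot(-a, up), -1.0, 1.0))
        if angle < half_angle:
            kept.append(g)   # near-vertical approach: retained
        # near-horizontal or from-below approaches fall outside the cone and are dropped
    return kept
```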
In the embodiment shown in fig. 5, a first set of grab poses capable of grabbing can be determined by screening vector data according to the obtained grab requirements, and a plurality of grab poses incapable of grabbing are deleted.
Optionally, referring to fig. 6, fig. 6 is a detailed flowchart of step S420 according to an embodiment of the present disclosure, and step S420 may further include steps S421 to S424.
Step S421, calculating motion trajectory data of each first capture pose.
When the first grabbing pose set is finely screened at the second level, the arm length of the robot and the obstacles in the scene differ from case to case, so in order to ensure that the grabbing end of the robot can reach the corresponding grabbing pose, the motion trajectory data of the mechanical arm when grabbing at each first grabbing pose can be calculated with a kinematic algorithm.
Optionally, the kinematic algorithm may be an inverse-kinematics algorithm of the robot or the like.
Step S422, according to the motion trail data of each first grabbing pose, a plurality of second grabbing poses are determined from the first grabbing pose set.
The first grabbing pose set is screened by judging, from the motion trajectory data, whether the grabbing end can reach the corresponding grabbing pose; the first grabbing poses that the grabbing end of the robot can reach are taken as the second grabbing poses.
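The reachability check can be sketched as follows; `ik_solve` and `plan_path` stand in for the robot's inverse-kinematics and trajectory-planning routines and are assumptions for this example rather than a particular vendor API.

```python
def second_level_reachability(first_grasps, ik_solve, plan_path):
    """Keep the first grabbing poses that the grabbing end can actually reach."""
    reachable = []
    for g in first_grasps:
        joints = ik_solve(g)            # inverse-kinematics solution for the grasp pose
        if joints is None:
            continue                    # pose lies outside the workspace
        trajectory = plan_path(joints)  # motion trajectory data toward the pose
        if trajectory is None:
            continue                    # no collision-free path was found
        reachable.append((g, trajectory))
    return reachable                    # the second grabbing poses (with trajectories)
```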
Step S423, determining the capturing success rate of each second capturing pose according to the historical pose database in the pose model.
In order to effectively improve the success rate of grabbing, after the plurality of feasible and reachable second grabbing poses are obtained, the grabbing success rate of each second grabbing pose can be estimated from a historical pose database in the pose model, for example the benchmark data set covering 190 cluttered, complex scenes collected in the same or similar scenes. By continuing to learn and train the pose model with the historical success rates of historical grabbing poses that are the same as or similar to each second grabbing pose, the success rate of the corresponding second grabbing pose when the same or a similar article is grabbed can be obtained.
And step S424, determining the target grabbing pose with the grabbing success rate reaching the success threshold.
One or more grabbing poses whose success rate reaches the success threshold are selected from the plurality of second grabbing poses by comparison, as grabbing poses with high reliability. The pose model can generate a sequence of feasible two-finger grabbing poses for the object, sorted in descending order of success rate over all the high-reliability grabbing poses, so that the grabbing pose with the highest success rate is taken as the final target grabbing pose.
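A possible sketch of this second-level selection is shown below; the `history_db.estimate` call and the 0.9 threshold are placeholders for the historical-pose-database lookup and for the operator-set success threshold.

```python
def select_target_grasp(second_grasps, history_db, success_threshold=0.9):
    """Rank reachable grasps by estimated success rate and return the best one."""
    scored = [(history_db.estimate(g), g) for g, _trajectory in second_grasps]
    # Keep only poses whose estimated success rate reaches the threshold,
    # then sort them in descending order of success rate.
    qualified = [(rate, g) for rate, g in scored if rate >= success_threshold]
    qualified.sort(key=lambda item: item[0], reverse=True)
    if not qualified:
        return None                     # no pose reaches the success threshold
    return qualified[0][1]              # highest-success-rate pose = target grabbing pose
```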
In the embodiment shown in fig. 6, the feasibility, accessibility and success rate of the target grabbing pose are effectively ensured through the second-level screening of accessibility and success rate.
Optionally, referring to fig. 7, fig. 7 is a detailed flowchart of step S200 according to an embodiment of the present application, and step S200 may further include steps S210 to S230.
And step S210, training according to the detection model parameters and the target object through the detection model to obtain a target detection model.
The target object can be learned and trained by inputting the parameters of the detection model and the related parameters of the target object into the detection model, and the target detection model can be obtained by training based on a target detection algorithm.
In step S220, a global image of the captured scene is acquired based on the second camera.
The second camera is a camera that photographs the grabbing scene globally. Its calibration mode is eye-to-hand, i.e. the second camera can be installed at a fixed position to photograph the whole grabbing scene, and the relative position of the second camera's coordinate system and the robot's coordinate system does not change.
Alternatively, the global image captured in the second camera may be obtained through a wireless connection or a wired connection with the second camera, for example, through bluetooth, a wired network, a wireless network, or the like. The obtained global image is an image in an RGB color mode, and the global image comprises an image of a target object.
Optionally, the first camera and the second camera may be various types of cameras, for example RealSense D400-series depth cameras, which are easy to set up and carry, add depth sensing to the device, can capture indoor or outdoor environments, offer long-range capability, and provide high depth resolution.
Step S230, determining the position information and the object information corresponding to the target object according to the global image through the target detection model, and using the position information and the object information as the target data.
The global image is input into the target detection model for training and learning; the target detection model can infer and predict the object frame and object label of the target object in the global image, thereby obtaining the position information and object information corresponding to the target object as the trained target data.
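As an example of this inference step, the sketch below uses the public YOLOv5 hub interface with a hypothetical fine-tuned weight file to turn the global image into an object frame and object label; the weight path and the single-best-detection policy are assumptions, not the patent's prescribed implementation.

```python
import torch

# Hypothetical weights fine-tuned on the objects to be grabbed.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='target_detector.pt')

def detect_target(global_image):
    """Return ((x1, y1, x2, y2), label, confidence) for the best detection, or None."""
    results = model(global_image)
    detections = results.xyxy[0]        # columns: x1, y1, x2, y2, confidence, class
    if detections.shape[0] == 0:
        return None
    best = detections[detections[:, 4].argmax()]
    x1, y1, x2, y2, conf, cls = best.tolist()
    label = results.names[int(cls)]
    # Position information = the object frame; object information = the label.
    return (x1, y1, x2, y2), label, conf
```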
Optionally, after the target data is obtained through training, a grabbing position of the robot when the robot grabs the target object can be determined according to the position information; the robot is driven to move to the grabbing position, so that the robot can grab the target object in the grabbing position conveniently.
In the embodiment shown in fig. 7, teaching training for capturing any objects of various types and different positions can be performed, so that the universality of capturing targets is improved, the use range of teaching is expanded, and the accuracy and pertinence of target data are improved.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a robot gripping device according to an embodiment of the present disclosure, the robot gripping device 500 may include:
an object training module 510, configured to train a target object captured by a robot through a detection model, and determine target data of the target object, where the target data includes position information and object information of the target object;
a pose determining module 520, configured to determine, according to the target data, an initial grabbing pose set when the robot grabs the target object through a pose model;
and a pose screening module 530, configured to screen the initial grabbing pose set and obtain a target grabbing pose, so that the robot grabs the target object according to the target grabbing pose.
In an optional embodiment, the pose determination module 520 may further include an acquisition sub-module and a processing sub-module;
the acquisition sub-module is used for acquiring a local image of the target object captured by a first camera and a depth image registered with the local image, wherein the first camera is a camera corresponding to the grabbing end of the robot that takes local shots;
and the processing sub-module is used for processing the local image and the depth image according to the target data through the pose model to obtain a plurality of initial grabbing poses at which the grabbing end grabs the target object, and taking the plurality of initial grabbing poses as the initial grabbing pose set.
In an optional embodiment, the pose filtering module 530 may further include a first-level filtering submodule and a second-level filtering submodule;
the first-stage screening submodule is used for performing first-stage screening on the initial grabbing pose set to obtain a first grabbing pose set meeting grabbing requirements;
and the secondary screening submodule is used for carrying out secondary screening on the first grabbing pose set to obtain the target grabbing pose of which the grabbing success rate reaches a success threshold value.
In an optional embodiment, the first-level screening submodule may further include a vector unit, a requirement unit, and a first screening unit;
the vector unit is used for acquiring vector data corresponding to each initial grabbing pose in the initial grabbing pose set;
a request unit, configured to determine the grabbing request corresponding to the target object based on the position information and the object information;
a first screening unit, configured to screen each vector data based on the grabbing requirement, use the initial grabbing pose at which the vector data meets the grabbing requirement as a first grabbing pose, and use a plurality of the first grabbing poses as the first grabbing pose set.
In an optional embodiment, the second-level screening submodule may further include a track unit and a second screening unit;
the track unit is used for calculating motion track data of each first grabbing pose; according to the motion trail data of each first grabbing pose, determining a plurality of second grabbing poses from the first grabbing pose set;
the second screening unit is used for determining the grabbing success rate of each second grabbing pose according to a historical pose database in the pose model; and determining the target grabbing pose with the grabbing success rate reaching the success threshold.
In an optional embodiment, the object training module 510 may further include a training sub-module, an image sub-module, and a determination sub-module;
the training submodule is used for training according to the detection model parameters and the target object through the detection model to obtain a target detection model;
the image submodule is used for acquiring a global image of a captured scene based on a second camera, wherein the second camera is a camera for shooting the captured scene globally;
and the determining submodule is used for determining the position information and the object information corresponding to the target object according to the global image through the target detection model, and taking the position information and the object information as the target data.
In an optional embodiment, the robot gripping device 500 may further include a moving module, configured to determine, according to the position information, a gripping position at which the robot grips the target object; and driving the robot to move to the grabbing position.
In an alternative embodiment, the robot gripping device 500 may further include a tracking module and a placing module;
the tracking module is used for tracking the motion path of the target object through the detection model to obtain a target track;
and the placing module is used for driving the robot to place the grabbed target object based on the target track.
Since the principle of the robot gripping device 500 in the embodiment of the present application for solving the problem is similar to that of the embodiment of the robot gripping method, the implementation of the robot gripping device 500 in the embodiment of the present application can refer to the description in the embodiment of the above method, and repeated descriptions are omitted.
The embodiment of the present application further provides a computer-readable storage medium, where computer program instructions are stored in the computer-readable storage medium, and when the computer program instructions are read and executed by a processor, the steps in any one of the robot grasping methods provided in the embodiment are executed.
In summary, the embodiments of the present application provide a robot grabbing method, a robot grabbing device, an electronic device, and a readable storage medium, which use a plurality of deep learning neural network models to learn the grabbed object and the grabbing pose respectively, screen the plurality of determined poses, and teach the robot to grab by determining a target grabbing pose with a high success rate. The robot can thus be taught to grab objects of various types and positions, which improves the universality of the teaching, expands its application range, and increases the success rate when the robot grabs and sorts according to the teaching.
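Read together, the above sketches suggest one possible, non-limiting shape for the whole flow; every interface below, including the pose model that consumes a local image and a registered depth image, is an assumption of this illustration rather than a prescribed implementation, and the helper functions are the ones sketched earlier in this description:

    def grab_once(robot, detector, pose_model, global_cam, eef_cam,
                  history_db, plan_trajectory):
        """One grabbing cycle, reusing the helper sketches above.  The detector is
        assumed to be trained on the target object already and to return a 3D
        'position' in the robot base frame plus a 'class' label per detection;
        pose_model.predict is assumed to return poses as dicts with a 4x4 'T'."""
        # 1. Target data (position information + object information).
        detections = detector.predict(global_cam.capture())
        if not detections:
            return False
        position_info = detections[0]["position"]
        object_info = detections[0]["class"]

        # 2. Move the grabbing end effector near the target object.
        move_to_grab_position(robot, position_info)

        # 3. Initial grabbing pose set from the local camera on the end effector
        #    (a local image and its registered depth image).
        rgb, depth = eef_cam.capture_rgbd()
        initial_poses = pose_model.predict(rgb, depth, detections[0])

        # 4. Two-level screening of the initial grabbing pose set.
        first_set = first_level_screening(initial_poses, position_info, object_info)
        target_pose = second_level_screening(first_set, plan_trajectory, history_db)

        # 5. Grab according to the target grabbing pose.
        if target_pose is None:
            return False
        robot.move_to_pose(target_pose["T"])
        robot.close_gripper()
        return True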
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. The apparatus embodiments described above are merely illustrative, and for example, the block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices according to various embodiments of the present application. In this regard, each block in the block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams, and combinations of blocks in the block diagrams, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Therefore, the present embodiment further provides a readable storage medium, in which computer program instructions are stored; when the computer program instructions are read and executed by a processor, they perform the steps of any one of the robot grabbing methods described above. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (11)

1. A robot grabbing method, the method comprising:
training a target object grabbed by a robot through a detection model, and determining target data of the target object, wherein the target data comprises position information and object information of the target object;
determining an initial grabbing pose set when the robot grabs the target object according to the target data through a pose model;
and screening the initial grabbing pose set to obtain a target grabbing pose, so that the robot can grab the target object according to the target grabbing pose.
2. The method of claim 1, wherein the determining, through a pose model according to the target data, an initial grabbing pose set when the robot grabs the target object comprises:
acquiring a local image of the target object shot by a first camera and a depth image registered with the local image, wherein the first camera is a camera for local shooting that corresponds to a grabbing end effector of the robot;
and processing the local image and the depth image according to the target data through the pose model to obtain a plurality of initial grabbing poses when the grabbing end effector grabs the target object, and taking the plurality of initial grabbing poses as the initial grabbing pose set.
3. The method of claim 1, wherein the screening the initial grabbing pose set to obtain a target grabbing pose comprises:
performing primary screening on the initial grabbing pose set to obtain a first grabbing pose set meeting grabbing requirements;
and performing secondary screening on the first grabbing pose set to obtain the target grabbing pose with the grabbing success rate reaching a success threshold value.
4. The method of claim 3, wherein the performing primary screening on the initial grabbing pose set to obtain a first grabbing pose set meeting grabbing requirements comprises:
acquiring vector data corresponding to each initial grabbing pose in the initial grabbing pose set;
determining the grabbing requirement corresponding to the target object based on the position information and the object information;
and screening each piece of vector data based on the grabbing requirement, taking each initial grabbing pose whose vector data meets the grabbing requirement as a first grabbing pose, and taking the plurality of first grabbing poses as the first grabbing pose set.
5. The method of claim 3, wherein the performing secondary screening on the first grabbing pose set to obtain the target grabbing pose with the grabbing success rate reaching a success threshold value comprises:
calculating motion trajectory data of each first grabbing pose;
determining a plurality of second grabbing poses from the first grabbing pose set according to the motion trajectory data of each first grabbing pose;
determining the grabbing success rate of each second grabbing pose according to a historical pose database in the pose model;
and determining the target grabbing pose with the grabbing success rate reaching the success threshold.
6. The method of claim 1, wherein the training of the target object grabbed by the robot through the detection model to determine the target data of the target object comprises:
training the detection model according to the detection model parameters and the target object to obtain a target detection model;
acquiring a global image of the grabbing scene based on a second camera, wherein the second camera is a camera that photographs the grabbing scene globally;
and determining the position information and the object information corresponding to the target object according to the global image through the target detection model, and taking the position information and the object information as the target data.
7. The method of claim 1, wherein after the target object grabbed by the robot is trained by the detection model and the target data of the target object is determined, the method further comprises:
determining a grabbing position of the robot when the robot grabs the target object according to the position information;
and driving the robot to move to the grabbing position.
8. The method of claim 1, further comprising:
tracking the motion path of the target object through the detection model to obtain a target track;
and driving the robot to place the grabbed target object based on the target track.
9. A robot grabbing device, characterized in that the device comprises:
the object training module is used for training a target object grabbed by the robot through a detection model and determining target data of the target object, wherein the target data comprise position information and object information of the target object;
the pose determining module is used for determining an initial grabbing pose set when the robot grabs the target object according to the target data through a pose model;
and the pose screening module is used for screening the initial grabbing pose set and acquiring a target grabbing pose so that the robot can grab the target object according to the target grabbing pose.
10. An electronic device, comprising a memory and a processor, wherein the memory stores program instructions which, when read and executed by the processor, perform the steps of the method of any one of claims 1 to 8.
11. A computer-readable storage medium having computer program instructions stored thereon which, when read and executed by a processor, perform the steps of the method of any one of claims 1 to 8.
CN202210335769.3A 2022-03-31 2022-03-31 Robot grabbing method and device, electronic equipment and readable storage medium Pending CN114683251A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210335769.3A CN114683251A (en) 2022-03-31 2022-03-31 Robot grabbing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210335769.3A CN114683251A (en) 2022-03-31 2022-03-31 Robot grabbing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114683251A true CN114683251A (en) 2022-07-01

Family

ID=82140514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210335769.3A Pending CN114683251A (en) 2022-03-31 2022-03-31 Robot grabbing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114683251A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115026836A (en) * 2022-07-21 2022-09-09 深圳市华成工业控制股份有限公司 Control method, device and equipment of five-axis manipulator and storage medium
CN117817666A (en) * 2024-01-25 2024-04-05 深圳市桃子自动化科技有限公司 Industrial robot intelligence centre gripping control system based on artificial intelligence

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510062A (en) * 2018-03-29 2018-09-07 东南大学 A kind of robot irregular object crawl pose rapid detection method based on concatenated convolutional neural network
WO2018193130A1 (en) * 2017-04-21 2018-10-25 Roboception Gmbh Method for creating a database of gripper poses, method for controlling a robot, computer-readable storage medium and materials handling system
CN109483554A (en) * 2019-01-22 2019-03-19 清华大学 Robotic Dynamic grasping means and system based on global and local vision semanteme
CN109658413A (en) * 2018-12-12 2019-04-19 深圳前海达闼云端智能科技有限公司 A kind of method of robot target grasping body position detection
CN110363815A (en) * 2019-05-05 2019-10-22 东南大学 The robot that Case-based Reasoning is divided under a kind of haplopia angle point cloud grabs detection method
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3
CN113232019A (en) * 2021-05-13 2021-08-10 中国联合网络通信集团有限公司 Mechanical arm control method and device, electronic equipment and storage medium
CN113787521A (en) * 2021-09-24 2021-12-14 上海微电机研究所(中国电子科技集团公司第二十一研究所) Robot grabbing method, system, medium and electronic device based on deep learning
CN113888631A (en) * 2021-08-31 2022-01-04 华南理工大学 Designated object grabbing method based on target cutting area


Similar Documents

Publication Publication Date Title
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
Ciocarlie et al. Towards reliable grasping and manipulation in household environments
Sahbani et al. An overview of 3D object grasp synthesis algorithms
Alonso et al. Current research trends in robot grasping and bin picking
CN114683251A (en) Robot grabbing method and device, electronic equipment and readable storage medium
WO2019080228A1 (en) Robot object-grasping control method and apparatus
Suzuki et al. Grasping of unknown objects on a planar surface using a single depth image
EA038279B1 (en) Method and system for grasping an object by means of a robotic device
Raessa et al. Teaching a robot to use electric tools with regrasp planning
Lengare et al. Human hand tracking using MATLAB to control Arduino based robotic arm
Omrčen et al. Autonomous acquisition of pushing actions to support object grasping with a humanoid robot
CN114746906A (en) Shared dense network with robot task specific headers
US10933526B2 (en) Method and robotic system for manipulating instruments
Mathur et al. A review of pick and place operation using computer vision and ros
EP4401930A1 (en) Learning from demonstration for determining robot perception motion
CN211890823U (en) Four-degree-of-freedom mechanical arm vision servo control system based on RealSense camera
Marín et al. A predictive interface based on virtual and augmented reality for task specification in a Web telerobotic system
Tudico et al. Improving and benchmarking motion planning for a mobile manipulator operating in unstructured environments
Takamido et al. Learning robot motion in a cluttered environment using unreliable human skeleton data collected by a single RGB camera
Infantino et al. Visual control of a robotic hand
Kyprianou et al. Bin-picking in the industry 4.0 era
Kusano et al. FCN-Based 6D Robotic Grasping for Arbitrary Placed Objects
US11712797B2 (en) Dual hand detection in teaching from demonstration
EP4386671A2 (en) Depth-based 3d human pose detection and tracking
WO2023100282A1 (en) Data generation system, model generation system, estimation system, trained model production method, robot control system, data generation method, and data generation program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building 6, 646 Jianchuan Road, Minhang District, Shanghai 201100

Applicant after: Jieka Robot Co.,Ltd.

Address before: Building 6, 646 Jianchuan Road, Minhang District, Shanghai 201100

Applicant before: SHANGHAI JAKA ROBOTICS Ltd.
