CN115446835B - Deep learning-based rigid and soft humanoid hand autonomous grabbing method - Google Patents

Deep learning-based rigid and soft humanoid hand autonomous grabbing method

Info

Publication number
CN115446835B
CN115446835B (application CN202211077521.8A)
Authority
CN
China
Prior art keywords
grabbing
soft
mode
training
humanoid hand
Prior art date
Legal status
Active
Application number
CN202211077521.8A
Other languages
Chinese (zh)
Other versions
CN115446835A (en)
Inventor
杜宇
刘冬
吴敏杰
李泳耀
田小静
丛明
Current Assignee
Dalian University of Technology
Dalian Jiaotong University
Original Assignee
Dalian University of Technology
Dalian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology and Dalian Jiaotong University
Priority to CN202211077521.8A
Publication of CN115446835A
Application granted
Publication of CN115446835B
Legal status: Active (current)
Anticipated expiration


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1628: Programme controls characterised by the control loop
    • B25J 9/163: Programme controls characterised by the control loop; learning, adaptive, model based, rule based expert control
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697: Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a deep-learning-based autonomous grabbing method for a rigid and soft humanoid hand, and belongs to the technical field of intelligent robot control. The grabbing method comprises the following steps: acquiring an RGB image of an object using a depth camera; inputting the RGB image into a YOLOv object detection algorithm based on a deep neural network model, and outputting the grabbing mode and grabbing area of the object; inputting the RGB image into an OpenCV-based image processing method, and outputting the grabbing angle of the object; and controlling the rigid and soft humanoid hand to grab the object according to the grabbing mode, grabbing area and grabbing angle. The invention realizes grabbing mode prediction and grabbing pose estimation at the same time, which avoids complex grabbing planning and allows slight contact between the rigid and soft humanoid hand and the tabletop; it also realizes accurate control of the hand, so that the rigid and soft humanoid hand can grab objects accurately and firmly.

Description

Deep learning-based rigid and soft humanoid hand autonomous grabbing method
Technical Field
The invention belongs to the technical field of intelligent robot control, and relates to a deep-learning-based autonomous grabbing method for a rigid and soft humanoid hand.
Background
As job tasks become increasingly fine, the requirements placed on dexterous hands rise correspondingly, and achieving stable and reliable grabbing with a dexterous hand is a major challenge in robotics. Compared with a two-finger gripper, an underactuated dexterous hand has better grabbing performance and a certain manipulation capability. Compared with a fully actuated dexterous hand, an underactuated dexterous hand has fewer drive units than degrees of freedom, so it can grab objects of different shapes with a simple control method. Underactuated hands readily achieve power grabbing, but many research difficulties remain in modeling and analysis, accurate pinching, and in-hand manipulation.
Data-driven machine learning approaches are very popular in robotic grabbing and have achieved positive results. Some researchers have built RGB-D data sets of object grabbing modes and, through a deep network, directly established a mapping from object images to four important grabbing modes, enabling a prosthetic dexterous hand to reach out and pick up a variety of everyday objects; however, grabbing positioning is still judged by a person. Another scheme takes RGB images of the grabbing scene as input, trains a deep neural network to predict nine human grabbing-action primitives, and uses tactile sensors to complete proximity positioning and grabbing with a robot arm and a five-fingered soft hand. For a 24-degree-of-freedom Shadow dexterous hand, a further approach obtains a complete three-dimensional object model with vision and plans grasps with GraspIt!; this method relies on an object model, and its grabbing generalization ability needs to be improved.
As can be seen, the problem in the related art is that grabbing mode prediction and grabbing pose estimation cannot be realized at the same time.
Disclosure of Invention
The invention solves the following problem: in the technical schemes of the related art, grabbing mode prediction and grabbing pose estimation cannot be realized at the same time.
In order to solve the problems, the invention adopts the following technical scheme:
The method is based on the complementarity between the deep learning method and underactuated self-adaptive grabbing: a deep learning network learns to classify the grabbing modes of different objects, and object detection plus image processing identify the object's grabbing mode, grabbing area and grabbing angle, thereby simplifying grabbing planning and control of the humanoid hand. The method is realized with an acquisition module, a first control module, a second control module and a grabbing module, and specifically comprises the following steps.
First, acquire RGB images of the object using a depth camera and train the model, specifically:
1.1) The acquisition module acquires RGB images of the object using the depth camera and establishes a data set; the data set is established so that the YOLOv object detection algorithm can be trained effectively and the object grabbing mode requirements can be met.
1.2) The data set is divided into a test set and a training set; the training set is used to train the YOLOv object detection algorithm to identify the grabbing mode, and the test set is used to evaluate the training result of the YOLOv object detection algorithm. The first control module inputs the RGB image into the YOLOv object detection algorithm based on the deep neural network model and outputs the grabbing mode and grabbing area of the object.
The specific process of training the YOLOv object detection algorithm with the training set in step 1.2) to identify the grabbing mode is as follows.
Human hand grabbing modes are rich and varied and can mainly be divided into two kinds: power grabbing and precision grabbing. Power grabbing means the fingers and palm jointly envelop the object during grabbing, including spherical grasp, cylindrical grasp and hook grasp; the grab is relatively firm. Precision grabbing means the fingers alone pinch the object, including fingertip pinch, three-point pinch and lateral pinch; the grab is relatively flexible. These six grabbing patterns cover the vast majority of grabbing gestures a person uses in daily activities. This patent divides objects into four grabbing modes: cylindrical envelope, spherical envelope, fine pinching and wide pinching. The cylindrical envelope and the spherical envelope belong to power grabbing and are suitable for thicker objects; the difference between cylindrical and spherical shapes corresponds to different swing angles of the thumb. Fine pinching and wide pinching belong to precision grabbing and are suitable for thinner objects; the difference in object width corresponds to different opening widths between the thumb and the other four fingers. After the grabbing mode is determined, training is carried out in combination with the object detection algorithm.
The grabbing modes are divided by considering the shape and size of the object; the specific dividing parameters are the thickness and width of the object. When the thickness of the object is less than 30 mm and the width is less than 30 mm, the object belongs to fine pinching; when the thickness is less than 30 mm and the width is greater than 30 mm, it belongs to wide pinching; when the thickness is greater than 30 mm and the shape is close to a cylinder, it belongs to the cylindrical envelope; when the thickness is greater than 30 mm and the shape is close to a sphere, it belongs to the spherical envelope.
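As an illustration of this rule, a minimal Python sketch is given below. The function name and the boolean shape flag are illustrative only; the returned labels correspond to the class names defined for the data set later in the description (power1/power2/precision1/precision2).

```python
def classify_grabbing_mode(thickness_mm: float, width_mm: float, is_cylindrical: bool) -> str:
    """Map object thickness/width (mm) and a coarse shape flag to one of the four grabbing modes."""
    if thickness_mm < 30:
        # Thin objects are pinched; the width decides which pinch variant.
        return "precision1" if width_mm < 30 else "precision2"  # fine pinching / wide pinching
    # Thick objects are enveloped; the overall shape decides which envelope.
    return "power1" if is_cylindrical else "power2"             # cylindrical / spherical envelope

print(classify_grabbing_mode(20, 15, False))  # -> precision1 (fine pinching)
print(classify_grabbing_mode(60, 60, True))   # -> power1 (cylindrical envelope)
```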
Training is divided into two parts. First, a convolutional neural network extracts features from the input pictures and outputs a feature map. The original RGB image is divided into small grid squares; a series of anchor frames is generated centered on each square, prediction frames are generated on the basis of the anchor frames, the midpoint of an anchor frame is set as the grabbing point, and positions and categories are labeled according to the positions of the real frames of the object. Finally, the association between the output feature map and the prediction-frame labels is established, a loss function is created, training is completed, and the corresponding grabbing mode and grabbing area are obtained according to the initially set criteria.
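A minimal, illustrative sketch of the grid-square and anchor-frame construction described above follows. The grid size and anchor dimensions are assumptions chosen for illustration, not values stated in the patent.

```python
import numpy as np

def generate_anchors(img_w, img_h, grid=13, anchor_sizes=((60, 60), (120, 60), (60, 120))):
    """Return anchor frames (cx, cy, w, h) centered on each grid square of the input image."""
    step_x, step_y = img_w / grid, img_h / grid
    anchors = []
    for gy in range(grid):
        for gx in range(grid):
            # The square center doubles as the candidate grabbing point described above.
            cx, cy = (gx + 0.5) * step_x, (gy + 0.5) * step_y
            for aw, ah in anchor_sizes:
                anchors.append((cx, cy, aw, ah))
    return np.array(anchors)

anchors = generate_anchors(640, 480)
print(anchors.shape)  # (13 * 13 * 3, 4) = (507, 4)
```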
After training in the first step is completed, the YOLOv object detection algorithm is tested and can identify the grabbing mode and grabbing area of the object. To verify the reliability and adaptability of the algorithm, besides detecting known objects in the test set, pictures of some unknown objects are also taken for testing.
Second, the second control module inputs the RGB image obtained in the first step into the OpenCV software library, processes it based on OpenCV's own image processing methods, and outputs the grabbing angle of the object (a sketch of this pipeline follows the list below):
1) Adjust the detection thresholds of the Canny operator to perform edge detection on the object;
2) Fill the contour shape of the object completely using erosion and dilation functions;
3) Enclose the outer contour of the object with the minimum bounding rectangle function to obtain the bounding rectangle;
4) Distinguish the long and short sides of the bounding rectangle and output the rotation angle for grabbing along the long side.
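The following is a minimal OpenCV sketch of steps 1) to 4). The Canny thresholds, kernel size and iteration counts are assumed values that would be tuned for the actual scene, and the angle convention returned by cv2.minAreaRect depends on the OpenCV version, so the long-side adjustment shown here should be checked against the installed version.

```python
import cv2
import numpy as np

def grabbing_angle(bgr_image):
    """Return the rotation angle of the long side of the object's minimum bounding rectangle."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                    # 1) edge detection (thresholds to be tuned)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(edges, kernel, iterations=2)      # 2) dilation then erosion closes the contour
    mask = cv2.erode(mask, kernel, iterations=1)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)        # assumes a single object dominates the scene
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)  # 3) minimum bounding rectangle
    if w < h:                                           # 4) report the rotation of the long side
        angle += 90
    return angle

# Example (file name is illustrative):
# print(grabbing_angle(cv2.imread("object.png")))
```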
Third, the grabbing module controls the rigid and soft humanoid hand through the motors to grab the object, according to the grabbing angle obtained in the second step and the grabbing mode and grabbing area obtained in the first step (a schematic sketch follows the sub-steps below), specifically:
3.1) Initialize the motors under no-load conditions of the designed rigid and soft humanoid hand, and set the initial position marks of the motors;
3.2) According to the grabbing area and the grabbing angle, control the mechanical arm connected with the rigid and soft humanoid hand to move to a preparation position 20 cm above the grabbing point;
3.3) Control the number of motor turns so that the fingers of the rigid and soft humanoid hand reach the pre-grabbing position;
3.4) Control the rigid and soft humanoid hand to adopt the pre-grabbing configuration according to the grabbing mode; for example, if grabbing a certain object requires the hand to rotate 90 degrees, the rigid and soft humanoid hand rotates 30 degrees in advance to realize the pre-grabbing configuration;
3.5) According to the grabbing height, control the mechanical arm to move vertically downwards so that the grabbing end point of the rigid and soft humanoid hand reaches the tabletop;
3.6) The rigid and soft humanoid hand completes self-adaptive grabbing and maintains the grab, and the mechanical arm moves upwards to lift the object; the grabbing height is obtained from the depth camera.
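A schematic sketch of steps 3.1) to 3.6) is given below. The `arm` and `hand` interfaces are hypothetical placeholders; the patent does not specify a programming API for the mechanical arm or the hand's motors, so every method name here is an assumption used only to make the sequence explicit.

```python
PRE_GRAB_HEIGHT_M = 0.20  # 20 cm above the grabbing point (step 3.2)

def execute_grab(arm, hand, grab_point_xyz, grab_angle_deg, grab_mode, grab_height_m):
    """Run steps 3.1) to 3.6) with placeholder `arm` and `hand` interfaces."""
    hand.initialize_motors()                                            # 3.1) zero the motors under no load
    x, y, z = grab_point_xyz
    arm.move_to((x, y, z + PRE_GRAB_HEIGHT_M), yaw_deg=grab_angle_deg)  # 3.2) preparation position
    hand.move_fingers_to_pregrab()                                      # 3.3) fingers to the pre-grabbing position
    hand.set_pregrab_configuration(grab_mode)                           # 3.4) thumb/finger layout for the mode
    arm.move_down_to(z + grab_height_m)                                 # 3.5) descend until the grab end point reaches the tabletop
    hand.close_adaptively()                                             # 3.6) underactuated self-adaptive closing
    arm.move_up(PRE_GRAB_HEIGHT_M)                                      #      lift the object while holding the grab
```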
The beneficial effects of the invention are as follows:
(1) End-to-end grabbing mode identification is realized through deep learning, i.e., the grabbing gesture and the corresponding grabbing mode of an object are identified and the grabbing position and grabbing angle of the object are obtained. The main advantage: compared with other rigid and soft humanoid hand grabbing methods, grabbing mode prediction and grabbing pose estimation are realized simultaneously, complex grabbing planning is avoided, and slight contact between the rigid and soft humanoid hand and the tabletop is allowed.
(2) The data set is trained and tested with the YOLOv object detection algorithm. YOLOv is a deep convolutional neural network that casts detection as regression; compared with the candidate-region feature extraction of the Fast R-CNN detection model, YOLOv trains on the whole image, which increases speed and better distinguishes target from background. Its main improvements are multi-scale prediction and a better backbone classification network and classifier, giving it strong generality and a low background false-detection rate. The OpenCV-based image processing method can output the grabbing angle of the object accurately and quickly.
(3) Based on the complementarity of the deep learning method in rigid and soft humanoid hand grabbing, grabbing mode classification and grabbing positioning are realized by establishing a data set and using an object detection algorithm; after training, the detection algorithm reaches a grabbing mode identification accuracy of 98.7% on known objects and 82.7% on unknown objects. The self-adaptability of the rigid and soft humanoid hand's grab compensates to some extent for the uncertainty of the learning algorithm and simplifies grabbing planning. The underactuated dexterous hand was mounted on a UR3e mechanical arm for grabbing experiments on known and unknown objects; different grabbing modes were adopted for objects of different shapes and sizes, achieving a grabbing success rate of 90.8%, which demonstrates that the autonomous grabbing method based on grabbing mode identification for the rigid and soft humanoid hand is practical.
(4) Accurate control of the rigid and soft humanoid hand can be realized, so that the hand can grab objects both accurately and firmly.
Drawings
Fig. 1 is a flowchart of the steps of the deep-learning-based rigid and soft humanoid hand autonomous grabbing method in an embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the invention may be more readily understood, the invention is described in more detail below with reference to specific embodiments illustrated in the accompanying drawings.
[ First embodiment ]
The embodiment provides a deep-learning-based autonomous grabbing method for a rigid and soft humanoid hand, which comprises the following steps:
S100: according to the scheme, object grabbing on a plane is considered, and the object is divided into 4 grabbing modes: cylindrical envelope, spherical envelope, fine pinching, and wide pinching. According to the scheme, object grabbing on a plane is considered, and the object is divided into 4 grabbing modes: cylindrical envelope, spherical envelope, fine pinching, and wide pinching.
S101: establishing a data set for the identification of the grabbing mode;
In this embodiment, a deep learning algorithm is used to identify the object grabbing mode, which requires training and validation on a suitable data set. The scheme therefore creates a data set for object grabbing mode identification.
The data set selects 80 common daily objects: 17 belong to the cylindrical envelope, such as pop cans and water bottles; 22 belong to the spherical envelope, such as tennis balls and apples; 14 belong to fine pinching, such as marker pens and glue sticks; and 27 belong to wide pinching, such as glasses cases and mice. The grabbing modes are divided by considering the shape and size of the object; the specific dividing parameters are the thickness and width of the object. When the thickness of the object is less than 30 mm and the width is less than 30 mm, the object belongs to fine pinching; when the thickness is less than 30 mm and the width is greater than 30 mm, it belongs to wide pinching; when the thickness is greater than 30 mm and the shape is close to a cylinder, it belongs to the cylindrical envelope; when the thickness is greater than 30 mm and the shape is close to a sphere, it belongs to the spherical envelope.
A Kinect v2 depth camera is fixed above the grabbing platform to capture and save RGB pictures of single objects. Objects are placed on the platform at random positions and rotations; 16 pictures are taken of each object, and a few objects are additionally photographed in a lying posture for another 16 pictures each, giving 1344 pictures in total. Finally, the grabbing mode and grabbing area of each picture are annotated with LabelImg software. The cylindrical envelope is labeled with the category "power1", the spherical envelope with "power2", fine pinching with "precision1", and wide pinching with "precision2". The grabbing area is annotated with a horizontal rectangular box whose center approximately coincides with the object's center of gravity and whose border encloses the object's outline as far as possible.
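For illustration, a sketch of a possible label encoding is shown below. The patent does not state which LabelImg export format was used (LabelImg supports both Pascal VOC XML and YOLO text labels), so the YOLO-style line, the class order and the numeric values here are assumptions.

```python
# LabelImg's YOLO export writes one text file per image, one line per box:
#   <class_id> <x_center> <y_center> <width> <height>   (all normalized to [0, 1])
CLASSES = ["power1", "power2", "precision1", "precision2"]  # cylindrical, spherical, fine pinch, wide pinch

sample_label = "1 0.52 0.47 0.31 0.28"  # hypothetical spherical-envelope annotation
class_id, cx, cy, w, h = sample_label.split()
print(CLASSES[int(class_id)], cx, cy, w, h)
```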
S102: dividing the data set into a test set and a training set;
Before training, 241 pictures are randomly selected from the 1344-picture data set as the test set, and the remaining pictures serve as the training set. After 1000 training iterations, the overall identification accuracy reaches 98.7%: 99.5% for the cylindrical envelope (power1), 99.5% for the spherical envelope (power2), 96.6% for fine pinching (precision1), and 99.3% for wide pinching (precision2). In addition, pictures of some unknown objects are taken for testing; the detection effect is good, with an identification accuracy of 82.75% over 24 unknown objects.
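A minimal sketch of the random split described above follows; the file names and the random seed are placeholders for illustration.

```python
import random

images = [f"img_{i:04d}.jpg" for i in range(1344)]  # placeholder file names
random.seed(0)                                      # seed only for reproducibility of the sketch
test_set = set(random.sample(images, 241))          # 241 randomly selected test pictures
train_set = [p for p in images if p not in test_set]
print(len(train_set), len(test_set))                # 1103 241
```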
It will be appreciated that dividing the data set into a training set and a test set, used respectively for training and testing the YOLOv object detection algorithm, makes it possible to evaluate the algorithm's training result effectively.
S103: training YOLOv a target detection algorithm by using a training set to identify a grabbing mode;
s104: testing YOLOv the accuracy of the grabbing mode identification by using a test set and a target detection algorithm;
S105: acquiring an RGB image of an object using a depth camera;
s106: inputting RGB images into YOLOv target detection algorithm based on the deep neural network model, and outputting a grabbing mode and grabbing areas of the object;
In this scheme, the detection thresholds of the Canny operator are adjusted to perform edge detection on the object, the contour shape of the object is filled completely with erosion and dilation functions, the outer contour is enclosed with the minimum bounding rectangle function to obtain the bounding rectangle, the long and short sides of the rectangle are distinguished, and the rotation angle for grabbing along the long side is output. It should be noted that this embodiment is applicable to daily objects of different colors and shapes.
S107: controlling the rigid and soft humanoid hand to grasp the object according to the grasping mode, the grasping area and the grasping angle;
In this scheme, the number of motor turns is controlled according to the grabbing area and grabbing angle so that the fingers of the rigid and soft humanoid hand reach the pre-grabbing position, and the bending speed of the fingers is controlled to realize coordinated grabbing motion. The hand is initialized with no load and the initial position marks of the motors are set. The mechanical arm connected with the hand is controlled to move to a preparation position a certain distance directly above the grabbing point. Current control is used to brake the fingers: when a motor's working current exceeds a set threshold, the motor stops rotating. The rigid and soft humanoid hand is controlled to adopt the pre-grabbing configuration according to the grabbing mode. According to the grabbing height, the mechanical arm is controlled to move vertically downwards so that the grabbing end point of the hand reaches the tabletop.
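A schematic sketch of the current-threshold braking described above is given below. The motor interface and the threshold value are hypothetical; the patent only states that a motor stops rotating once its working current exceeds a set threshold.

```python
CURRENT_THRESHOLD_A = 0.8  # assumed threshold value, for illustration only

def close_finger_with_current_stop(motor, target_turns, step_turns=0.05):
    """Advance a finger motor in small steps and brake once the working current exceeds the threshold."""
    turns = 0.0
    while turns < target_turns:
        motor.rotate(step_turns)                 # advance the tendon/linkage a small amount
        turns += step_turns
        if motor.read_current() > CURRENT_THRESHOLD_A:
            motor.stop()                         # contact detected: stop the motor (finger braking)
            break
```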
S108: the soft humanoid hand completes self-adaptive grabbing and keeps grabbing, and the mechanical arm moves upwards to grab the object. Wherein the grabbing height is obtained by a depth camera.
In this scheme, the grabbing end point of the rigid and soft humanoid hand is defined as the meeting point of the middle fingertip and the thumb tip in a natural precision-pinch configuration. Finally, combining the coordinates of the dexterous hand's grabbing end point and taking error compensation into account, the x, y and z coordinates of the final grabbing position are obtained by conversion.
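The following is a minimal sketch of that final conversion, assuming a pin-hole back-projection from the pixel grabbing point and depth to camera-frame coordinates followed by the end-point offset and an error-compensation term. The intrinsics, offset and compensation values are assumptions, and the camera-to-robot (hand-eye) transform, which the scheme would also require, is omitted here.

```python
import numpy as np

FX, FY, CX, CY = 1060.0, 1060.0, 960.0, 540.0       # assumed color-camera intrinsics
HAND_ENDPOINT_OFFSET = np.array([0.0, 0.0, -0.12])   # assumed offset to the thumb/middle-finger grab end point (m)
ERROR_COMPENSATION = np.array([0.005, -0.003, 0.0])  # assumed calibration residuals (m)

def final_grab_position(u, v, depth_m):
    """Back-project the pixel grabbing point and apply the end-point offset and error compensation."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    p_cam = np.array([x, y, depth_m])
    return p_cam + HAND_ENDPOINT_OFFSET + ERROR_COMPENSATION

print(final_grab_position(980, 560, 0.75))
```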
It can be understood that the scheme realizes accurate control of the rigid and soft humanoid hand, so that the hand can grab objects accurately and firmly. Based on the complementarity of the deep learning method in rigid and soft humanoid hand grabbing, the scheme realizes grabbing mode classification and grabbing positioning by establishing an object grabbing mode data set and using an object detection algorithm; after training, the detection algorithm reaches a grabbing mode identification accuracy of 98.7% on known objects and 82.7% on unknown objects. The self-adaptability of the hand's grab compensates to some extent for the uncertainty of the learning algorithm and simplifies grabbing planning. The underactuated dexterous hand was mounted on a UR3e mechanical arm for grabbing experiments on known and unknown objects; different grabbing modes were adopted for objects of different shapes and sizes, achieving a grabbing success rate of 90.8%, which demonstrates the practicality of the autonomous grabbing method based on grabbing mode identification for the rigid and soft humanoid hand.
The present scheme further provides a rigid and soft humanoid hand, comprising a processor and a memory; the memory stores a program or instructions executable on the processor which, when executed by the processor, implement the steps of the deep-learning-based rigid and soft humanoid hand autonomous grabbing method according to any embodiment of the invention.

Claims (2)

1. A deep-learning-based rigid and soft humanoid hand autonomous grabbing method, characterized in that, based on the complementarity between the deep learning method and underactuated self-adaptive grabbing, the method uses a deep learning network to learn the classification of different objects' grabbing modes and identifies the object grabbing mode, grabbing area and grabbing angle through object detection and image processing, so as to simplify grabbing planning and control of the humanoid hand; the method is realized based on an acquisition module, a first control module, a second control module and a grabbing module and comprises the following steps:
in a first step, acquiring RGB images of an object using a depth camera and training the model, specifically:
the acquisition module acquires RGB images of the object using the depth camera and establishes a data set; the data set is divided into a test set and a training set, wherein the training set is used for training the YOLOv object detection algorithm to identify the grabbing mode, and the test set is used for evaluating the training result of the YOLOv object detection algorithm; the RGB image is input into the YOLOv object detection algorithm based on the deep neural network model through the first control module, and the grabbing mode and grabbing area of the object are output;
in a second step, the second control module inputs the RGB image obtained in the first step into the OpenCV software library, processes it based on OpenCV's own image processing methods, and outputs the grabbing angle of the object;
in a third step, the grabbing module controls the rigid and soft humanoid hand through the motors to grab the object, according to the grabbing angle obtained in the second step and the grabbing mode and grabbing area obtained in the first step, specifically:
3.1) initializing the motors under no-load conditions of the designed rigid and soft humanoid hand, and setting the initial position marks of the motors;
3.2) according to the grabbing area and the grabbing angle, controlling the mechanical arm connected with the rigid and soft humanoid hand to move to a preparation position 20 cm above the grabbing point;
3.3) controlling the number of motor turns so that the fingers of the rigid and soft humanoid hand reach the pre-grabbing position;
3.4) controlling the rigid and soft humanoid hand to adopt the pre-grabbing configuration according to the grabbing mode;
3.5) according to the grabbing height, controlling the mechanical arm to move vertically downwards so that the grabbing end point of the rigid and soft humanoid hand reaches the tabletop;
3.6) the rigid and soft humanoid hand completes self-adaptive grabbing and maintains the grab, and the mechanical arm moves upwards to lift the object; the grabbing height is obtained through the depth camera;
in the first step, the specific process of training the YOLOv object detection algorithm with the training set to identify the grabbing mode is as follows:
dividing objects into four grabbing modes, namely cylindrical envelope, spherical envelope, fine pinching and wide pinching, and
training in combination with the object detection algorithm;
the training is divided into two parts: first, a convolutional neural network extracts features from the input pictures and outputs a feature map; the original RGB image is divided into small grid squares, a series of anchor frames is generated centered on each square, prediction frames are generated on the basis of the anchor frames, the midpoint of an anchor frame is set as the grabbing point, and positions and categories are labeled according to the positions of the real frames of the object; finally, the association between the output feature map and the prediction-frame labels is established, a loss function is created, training is completed, and the corresponding grabbing mode and grabbing area are obtained according to the initially set criteria;
after the training in the first step is completed, the YOLOv object detection algorithm is tested and can identify the grabbing mode and grabbing area of the object.
2. The deep-learning-based rigid and soft humanoid hand autonomous grabbing method according to claim 1, characterized in that the object grabbing modes are divided by considering the shape and size of the object, the specific dividing parameters including the thickness and width of the object; when the thickness of the object is less than 30 mm and the width is less than 30 mm, the object belongs to fine pinching; when the thickness is less than 30 mm and the width is greater than 30 mm, it belongs to wide pinching; when the thickness is greater than 30 mm and the shape is close to a cylinder, it belongs to the cylindrical envelope; when the thickness is greater than 30 mm and the shape is close to a sphere, it belongs to the spherical envelope.
CN202211077521.8A 2022-09-05 2022-09-05 Deep learning-based rigid and soft humanoid hand autonomous grabbing method Active CN115446835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211077521.8A CN115446835B (en) 2022-09-05 2022-09-05 Deep learning-based rigid and soft humanoid hand autonomous grabbing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211077521.8A CN115446835B (en) 2022-09-05 2022-09-05 Deep learning-based rigid and soft humanoid hand autonomous grabbing method

Publications (2)

Publication Number Publication Date
CN115446835A CN115446835A (en) 2022-12-09
CN115446835B (en) 2024-06-14

Family

ID=84303809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211077521.8A Active CN115446835B (en) 2022-09-05 2022-09-05 Deep learning-based rigid and soft humanoid hand autonomous grabbing method

Country Status (1)

Country Link
CN (1) CN115446835B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116652940B (en) * 2023-05-19 2024-06-04 兰州大学 Human hand imitation precision control method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108972494A (en) * 2018-06-22 2018-12-11 华南理工大学 A kind of Apery manipulator crawl control system and its data processing method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7136554B2 (en) * 2017-12-18 2022-09-13 国立大学法人信州大学 Grasping device, learning device, program, grasping system, and learning method
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3
CN113618709B (en) * 2021-07-07 2023-12-29 浙江大学 Multi-mode force control nondestructive grabbing device for intelligent production line
CN113537079A (en) * 2021-07-19 2021-10-22 江苏天楹机器人智能科技有限公司 Target image angle calculation method based on deep learning
CN113752255B (en) * 2021-08-24 2022-12-09 浙江工业大学 Mechanical arm six-degree-of-freedom real-time grabbing method based on deep reinforcement learning

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108972494A (en) * 2018-06-22 2018-12-11 华南理工大学 A kind of Apery manipulator crawl control system and its data processing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
结合深度学习的机械臂视觉抓取控制 [Visual grasping control of a robotic arm combined with deep learning]; 白成超; 晏卓; 宋俊霖; 载人航天 (Manned Spaceflight); 2018-06-15 (No. 03); full text *

Also Published As

Publication number Publication date
CN115446835A (en) 2022-12-09

Similar Documents

Publication Publication Date Title
Schmidt et al. Grasping of unknown objects using deep convolutional neural networks based on depth images
Morrison et al. Closing the loop for robotic grasping: A real-time, generative grasp synthesis approach
Karaoguz et al. Object detection approach for robot grasp detection
Calandra et al. The feeling of success: Does touch sensing help predict grasp outcomes?
CN110900581B (en) Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
Eppner et al. Grasping unknown objects by exploiting shape adaptability and environmental constraints
Bekiroglu et al. Assessing grasp stability based on learning and haptic data
Morales et al. Integrated grasp planning and visual object localization for a humanoid robot with five-fingered hands
US8428311B2 (en) Capturing and recognizing hand postures using inner distance shape contexts
CN108171748A (en) A kind of visual identity of object manipulator intelligent grabbing application and localization method
Aleotti et al. Grasp recognition in virtual reality for robot pregrasp planning by demonstration
CN108972494A (en) A kind of Apery manipulator crawl control system and its data processing method
CN110355754A (en) Robot eye system, control method, equipment and storage medium
Liu et al. Robotic objects detection and grasping in clutter based on cascaded deep convolutional neural network
CN114097004A (en) Autonomous task performance based on visual embedding
Adjigble et al. Model-free and learning-free grasping by local contact moment matching
Yu et al. Robotic grasping of unknown objects using novel multilevel convolutional neural networks: From parallel gripper to dexterous hand
CN114952809A (en) Workpiece identification and pose detection method and system and grabbing control method of mechanical arm
CN115446835B (en) Deep learning-based rigid and soft humanoid hand autonomous grabbing method
Song et al. Learning optimal grasping posture of multi-fingered dexterous hands for unknown objects
CN117340929A (en) Flexible clamping jaw grabbing and disposing device and method based on three-dimensional point cloud data
CN114882113A (en) Five-finger mechanical dexterous hand grabbing and transferring method based on shape correspondence of similar objects
Chen et al. Robotic grasp control policy with target pre-detection based on deep Q-learning
Romero et al. Human-to-robot mapping of grasps
Sanchez-Lopez et al. A real-time 3D pose based visual servoing implementation for an autonomous mobile robot manipulator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant