CN112372641B - Household service robot article grabbing method based on visual feedforward and visual feedback - Google Patents
Household service robot article grabbing method based on visual feedforward and visual feedback
- Publication number
- CN112372641B CN202010783580.1A
- Authority
- CN
- China
- Prior art keywords
- camera
- visual
- mechanical arm
- article
- tail end
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a household service robot article grabbing method based on visual feedforward and visual feedback. The method involves technologies such as article identification and positioning, article pose estimation, and mechanical arm control, and is mainly applied to home service robots to realize article grabbing. The global camera is controlled to identify and roughly position the object, completing the visual feedforward; the camera at the tail end of the mechanical arm is controlled to identify and precisely position the object, completing the visual feedback; and the mechanical arm is then controlled to grasp the object accurately according to the rough and precise positions. The method can meet the requirements of a wide variety of object grabbing tasks of a home service robot in a home scene, and greatly improves the effectiveness and adaptability of the robot in executing tasks.
Description
Technical Field
The invention belongs to the technical field of hand-eye coordination control and provides a grabbing operation method for a home service robot arm-hand system based on the combination of visual feedforward and visual feedback, relating to technologies such as article identification and positioning, article pose estimation, and mechanical arm control.
Background
A service robot is a semi-autonomous or fully autonomous robot that performs service work useful to humans. Service robots have a wide range of applications and are mainly used for maintenance, repair, transportation, cleaning, security, rescue, supervision, reception, delivery and other work. In executing these tasks the robot almost always needs to grasp articles, and it must acquire information about each article during the grasping operation in order to complete the grasping task.
At present, robot article grabbing generally adopts one of two schemes: grabbing a limited set of articles, or grabbing articles at preset positions. In the first scheme, the articles to be grasped are one or a few fixed objects with simple, uniform shapes, and grabbing can be completed without accurate positioning by fitting the gripper to the objects. In the second scheme, the position and posture of the article to be grasped are known in advance, and the robot completes the task by grasping according to this known information. However, both schemes can only grab specific objects, while the articles in a home scene are of many kinds and lie in random poses, so grabbing diverse objects in such a scene is difficult. It is therefore of great significance to study a robot grabbing method that combines object identification and positioning with pose estimation, enabling the robot to grab diverse objects in a complex environment.
Disclosure of Invention
The invention aims to provide a household service robot article grabbing method based on visual feedforward and visual feedback, so as to overcome the limitations of existing article grabbing methods. The method involves a global RGBD camera, an RGBD camera at the tail end of the mechanical arm, and an arm-hand system: visual feedforward is completed by identifying and positioning the article in the large scene; visual feedback is completed by estimating the pose of the article at close range on the basis of the feedforward; and the mechanical arm is controlled to complete the grabbing task according to the article pose.
In order to achieve the above purpose, the technical scheme provided by the invention comprises the following steps:
Step 1: Performing internal parameter calibration and hand-eye calibration on the global camera and the camera at the tail end of the mechanical arm to obtain the camera intrinsic matrices and the conversion relations between coordinate systems;
Step 2: Acquiring pictures of articles in the home scene, labelling the acquired data to obtain an article data set, and establishing a pose estimation database;
Step 3: Training on the home-scene article data set with an artificial neural network article identification algorithm, and using the trained model to identify articles of the home scene in the global camera color picture, so as to obtain the types of the articles in the scene view and the pixel coordinates of the center point of each article in the captured color picture;
Step 4: Selecting the article to be grabbed according to the task requirements, aligning the pixel coordinates of the article in the image with the depth-camera coordinate system of the global camera, and calculating the rough coordinates of the article in the world coordinate system through the hand-eye calibration conversion matrix, thereby completing the visual feedforward;
Step 5: According to the rough position coordinates obtained by the visual feedforward, moving the tail end of the mechanical arm to a point above and in front of the article, so that the camera at the tail end of the mechanical arm faces the article to be grabbed and the article lies well within its field of view;
Step 6: Capturing information of the article in the field of view with the camera at the tail end of the mechanical arm, calculating the normal features of the depth information, matching them window by window against the templates in the pose estimation database, estimating the pose of the article in the picture from the best match, and calculating the accurate coordinates and pose of the article in the world coordinate system through the hand-eye calibration transformation matrix, thereby completing the visual feedback;
Step 7: According to the accurate coordinates and pose obtained by the visual feedback, and in combination with the preset geometric grabbing point and grabbing mode of the article, moving the gripper at the tail end of the mechanical arm to the grabbing point to complete the article grabbing task.
The beneficial effects of the above technical scheme are as follows: in the household service robot article grabbing method based on visual feedforward and visual feedback, the visual feedforward over the home scene and the visual feedback on the article are completed by the global camera and the camera at the tail end of the mechanical arm, so that accurate grabbing of diverse articles in a complicated home scene is realized. The method can greatly improve the effectiveness and adaptability of the home service robot in executing tasks.
Drawings
For a clearer description of the technical solutions of the present invention, the drawings used in the embodiments or in the description of the prior art are briefly introduced below. The drawings described below are clearly embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a robot gripping task execution based on visual feedforward and visual feedback in an embodiment of the invention;
FIG. 2 is a schematic layout diagram of the cameras and the mechanical arm in an embodiment of the present invention, wherein 1 is the camera at the tail end of the mechanical arm, 2 is the mechanical arm, and 3 is the global camera;
FIG. 3 is a diagram of a matching template of a can in a gesture estimation database according to an embodiment of the present invention;
FIG. 4 is a diagram of the recognition effect of a global camera visual feed-forward object in an embodiment of the invention;
FIG. 5 is a schematic diagram of a global camera color pixel coordinate and depth three-dimensional coordinate alignment solution in an embodiment of the present invention;
FIG. 6 is a diagram showing the pose estimation effect of the visual feedback from the camera at the tail end of the mechanical arm in an embodiment of the invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
In this embodiment, a household service robot article grabbing method based on visual feedforward and visual feedback, as shown in fig. 1, includes the following steps:
Step 1: Performing internal parameter calibration and hand-eye calibration on the global camera and the camera at the tail end of the mechanical arm to obtain the camera intrinsics and the conversion relations between coordinate systems, acquiring information of the objects in the home scene, and establishing a pose estimation database;
In this embodiment, the global camera and the camera at the tail end of the mechanical arm first undergo internal reference calibration with the aid of a calibration plate; each calibration result is a 3×3 intrinsic matrix. After the internal reference calibration is completed, hand-eye calibration is performed. The cameras and the mechanical arm are arranged as shown in fig. 2: the global camera is in an eye-to-hand configuration and the camera at the tail end of the mechanical arm is in an eye-in-hand configuration. From the hand-eye calibration results, the transformation matrix T_cr between the global camera coordinate system and the mechanical arm base coordinate system, and the transformation matrix T_ce between the tail-end camera coordinate system and the tail end of the mechanical arm, are obtained.
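The calibration products of this step — a 3×3 intrinsic matrix per camera and the homogeneous transforms T_cr and T_ce — can be sketched in a few lines of Python. All numeric values below are hypothetical placeholders; in practice they come from a checkerboard procedure (for example OpenCV's calibrateCamera and calibrateHandEye).

```python
import numpy as np

# Hypothetical intrinsics for one camera (real values come from calibration).
fx, fy, u0, v0 = 615.0, 615.0, 320.0, 240.0
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])          # the 3x3 intrinsic matrix

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_cr: global camera frame -> mechanical arm base frame (eye-to-hand result).
T_cr = make_transform(np.eye(3), [0.5, 0.0, 1.2])
# T_ce: tail-end camera frame -> arm tail end (eye-in-hand result).
T_ce = make_transform(np.eye(3), [0.0, 0.0, 0.05])

# A point seen in the global camera frame maps into the base frame:
p_cam = np.array([0.1, 0.2, 0.8, 1.0])   # homogeneous coordinates
p_base = T_cr @ p_cam
```

Downstream steps then only ever multiply points through these matrices, as in the last two lines.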
Step 2: Acquiring pictures of articles in the home scene, labelling the acquired data to obtain an article data set, and establishing a pose estimation database;
In this embodiment, a number of pictures of various articles in the home scene are labelled with a data labelling tool, and the labelling results are stored to establish the data set of articles in the home scene. Each object is then three-dimensionally scanned and modelled with the RGBD camera to obtain a mesh model in STL format; a pop-can template is shown in fig. 3. These models are stored as template data to establish the pose estimation database.
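As a minimal sketch of the two artifacts of this step, the following builds one labelled data-set record and one pose-estimation template entry pointing at a scanned STL mesh. The file names, field names, and bounding-box values are all hypothetical.

```python
import json
from pathlib import Path

def make_annotation(image_file, label, bbox):
    """One data-labelling record: object class plus its bounding box in pixels."""
    return {"image": image_file, "label": label, "bbox": bbox}

def register_template(db, name, stl_path):
    """Add an object's scanned STL mesh to the pose estimation database."""
    db[name] = {"mesh": str(stl_path)}
    return db

# A hypothetical labelled picture of a pop can, and its template entry.
dataset = [make_annotation("scene_0001.png", "pop_can", [120, 80, 60, 110])]
pose_db = register_template({}, "pop_can", Path("templates/pop_can.stl"))
print(json.dumps(dataset[0]))   # records would be persisted, e.g. as JSON
```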
Step 3: Training on the home-scene article data set with an artificial neural network article identification algorithm, and using the trained model to identify articles of the home scene in the global camera color picture, so as to obtain the types of the articles in the scene view and the pixel coordinates of the center point of each article in the captured color picture;
The home-scene article data set established in step 2 is trained with the artificial neural network article identification algorithm to obtain a trained model. The model is used to identify articles of the home scene in the global camera color picture; each identification result is displayed as a selection frame, the center point of the frame is taken as the center point of the identified article, and its pixel coordinates (u, v) in the captured color picture are obtained. The identification result is shown in fig. 4.
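Extracting the center-point pixel coordinates (u, v) from a detector's selection frame reduces to a one-line computation; the (x, y, w, h) box format and the sample values below are assumptions for illustration.

```python
def box_center(bbox):
    """Pixel coordinates (u, v) of the center of a detector selection frame
    given as (x, y, w, h) in the color image."""
    x, y, w, h = bbox
    return (x + w / 2.0, y + h / 2.0)

# e.g. a hypothetical detection framed at (120, 80) with size 60x110 pixels:
u, v = box_center((120, 80, 60, 110))
```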
Step 4: Selecting the article to be grabbed according to the task requirements, aligning the pixel coordinates of the article in the image with the depth-camera coordinate system of the global camera, and calculating the rough coordinates of the article in the world coordinate system through the hand-eye calibration conversion matrix, thereby completing the visual feedforward;
According to the frame-selection result of the object to be grabbed, the pixel coordinates (u, v) of the center point of the object in the picture are obtained. According to the imaging principle, as shown in fig. 5, formula (1) holds:

x_c = (u − u_0) · z_c / f_x,  y_c = (v − v_0) · z_c / f_y  (1)

wherein u_0, v_0, f_x and f_y are the internal parameters of the color camera, (u, v) is a pixel point in the image coordinate system, and z_c is the z-axis value of the camera coordinates provided by the depth camera. The calculated (x_c, y_c, z_c) are the rough coordinates of the object in the camera coordinate system; applying the coordinate transformation matrix T_cr then gives the rough coordinates of the object in the world coordinate system, which completes the visual feedforward.
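The pixel-to-world conversion of this step — back-projection through the color-camera intrinsics followed by the hand-eye transform T_cr — can be sketched as below. The intrinsic values are hypothetical, and T_cr is taken as the identity only so that the example checks itself.

```python
import numpy as np

def pixel_to_world(u, v, z_c, K, T_cr):
    """Back-project pixel (u, v) with depth z_c through intrinsics K,
    then map the camera-frame point into the world/base frame via T_cr."""
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    x_c = (u - u0) * z_c / fx          # formula (1), x component
    y_c = (v - v0) * z_c / fy          # formula (1), y component
    p_cam = np.array([x_c, y_c, z_c, 1.0])
    return (T_cr @ p_cam)[:3]

K = np.array([[600.0, 0.0, 320.0],     # hypothetical intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
T_cr = np.eye(4)                       # identity: camera frame == world frame
p_w = pixel_to_world(380.0, 300.0, 1.2, K, T_cr)
```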
Step 5: According to the rough position coordinates obtained by the visual feedforward, moving the tail end of the mechanical arm to a point above and in front of the article, so that the camera at the tail end of the mechanical arm faces the article to be grabbed and the article lies well within its field of view;
Step 6: Capturing information of the article in the field of view with the camera at the tail end of the mechanical arm, calculating the normal features of the depth information, matching them window by window against the templates in the pose estimation database, estimating the pose of the article in the picture from the best match, and calculating the accurate coordinates and pose of the article in the world coordinate system through the hand-eye calibration transformation matrix, thereby completing the visual feedback;
Information capture of the object to be grabbed in the field of view is completed by the camera at the tail end of the mechanical arm, and the normal features of the depth information are calculated. Based on the linemod algorithm, the normal features are quantized into 5 directions and stored after expansion; the cosine values between the stored result and the templates in the pose estimation database serve as the matching measure, the matching degree is calculated window by window, and pose estimation is performed on the best matching result to obtain the pose of the object to be grabbed in the depth picture. Through the coordinate transformation matrix T_ce, the accurate coordinates and the corresponding pose of the object in the world coordinate system are obtained, completing the visual feedback. The recognition result is shown in fig. 6.
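The window-by-window cosine matching of quantized normal directions can be illustrated with a deliberately simplified sketch (5 direction bins, as above). The real linemod algorithm additionally spreads ("expands") the quantized features and matches multi-modal templates, which this toy version omits; the angle values below are synthetic.

```python
import numpy as np

def quantize_normals(angles, n_bins=5):
    """Quantize in-plane normal angles (radians) into n_bins discrete
    directions and return a normalized direction histogram."""
    bins = np.floor(angles / (np.pi / n_bins)).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), minlength=n_bins).astype(float)
    return hist / np.linalg.norm(hist)

def cosine_match(scene_patch, template):
    """Cosine similarity between the quantized-normal histograms of a scene
    window and a database template; 1.0 is a perfect match."""
    return float(quantize_normals(scene_patch) @ quantize_normals(template))

def best_window(scene, template, win):
    """Slide a win x win window over the scene and return the window origin
    with the highest matching degree, plus the score itself."""
    best, best_ij = -1.0, (0, 0)
    h, w = scene.shape
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            s = cosine_match(scene[i:i + win, j:j + win], template)
            if s > best:
                best, best_ij = s, (i, j)
    return best_ij, best

# Synthetic example: a 3x3 template of normals at 1.0 rad hidden at (2, 2).
template = np.full((3, 3), 1.0)
scene = np.zeros((6, 6))
scene[2:5, 2:5] = 1.0
loc, score = best_window(scene, template, 3)
```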
Step 7: According to the accurate coordinates and pose obtained by the visual feedback, and in combination with the preset geometric grabbing point and grabbing mode of the article, moving the gripper at the tail end of the mechanical arm to the grabbing point to complete the article grabbing task.
Claims (4)
1. A home service robot article grabbing method based on visual feedforward and visual feedback is characterized by comprising the following steps:
Step 1: Performing internal parameter calibration and hand-eye calibration on the global camera and the camera at the tail end of the mechanical arm to obtain the camera intrinsic matrices and the conversion relations between coordinate systems;
Step 2: Acquiring pictures of articles in the home scene, labelling the acquired data to obtain an article data set, and establishing a pose estimation database;
Step 3: Training on the home-scene article data set with an artificial neural network article identification algorithm, and using the trained model to identify articles of the home scene in the global camera color picture, so as to obtain the types of the articles in the scene view and the pixel coordinates of the center point of each article in the captured color picture;
Step 4: Selecting the article to be grabbed according to the task requirements, aligning the pixel coordinates of the article in the image with the depth-camera coordinate system of the global camera, and calculating the rough coordinates of the article in the world coordinate system through the hand-eye calibration conversion matrix, thereby completing the visual feedforward;
Step 5: According to the rough position coordinates obtained by the visual feedforward, moving the tail end of the mechanical arm to a point above and in front of the article, so that the camera at the tail end of the mechanical arm faces the article to be grabbed and the article lies well within its field of view;
Step 6: Capturing information of the article in the field of view with the camera at the tail end of the mechanical arm, calculating the normal features of the depth information, matching them window by window against the templates in the pose estimation database, estimating the pose of the article in the picture from the best match, and calculating the accurate coordinates and pose of the article in the world coordinate system through the hand-eye calibration transformation matrix, thereby completing the visual feedback;
Step 7: According to the accurate coordinates and pose obtained by the visual feedback, and in combination with the preset geometric grabbing point and grabbing mode of the article, moving the gripper at the tail end of the mechanical arm to the grabbing point to complete the article grabbing task.
2. The robotic article grabbing method according to claim 1, wherein: in step 2, the pose estimation database is established by three-dimensional scanning modelling; mesh models of the objects in the home scene are built and stored centrally in STL file format, providing templates for subsequent matching.
3. The robotic article grabbing method according to claim 1, wherein: the visual feedforward of steps 3 and 4 comprises identifying the various objects in the home scene through an artificial neural network algorithm, and roughly positioning the objects in the home scene through a series of conversions of the pixel coordinates of the center point of the selection frame.
4. The robotic article grabbing method according to claim 1, wherein: in the visual feedback, information capture of the object to be grabbed in the field of view is completed by the camera at the tail end of the mechanical arm; the normal features of the depth information are calculated and, based on the linemod algorithm, quantized into 5 directions and stored after expansion; the cosine values between the stored result and the templates in the pose estimation database serve as the matching measure; the matching degree is calculated window by window and pose estimation is performed on the best matching result to obtain the pose of the object in the depth image; and the accurate coordinates and the corresponding pose of the object in the world coordinate system are obtained through the coordinate conversion matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010783580.1A CN112372641B (en) | 2020-08-06 | 2020-08-06 | Household service robot article grabbing method based on visual feedforward and visual feedback |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112372641A CN112372641A (en) | 2021-02-19 |
CN112372641B true CN112372641B (en) | 2023-06-02 |
Family
ID=74586010
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010783580.1A Active CN112372641B (en) | 2020-08-06 | 2020-08-06 | Household service robot article grabbing method based on visual feedforward and visual feedback |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112372641B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113084808B * | 2021-04-02 | 2023-09-22 | Shanghai Intelligent Manufacturing Functional Platform Co., Ltd. | Monocular vision-based 2D plane grabbing method for mobile mechanical arm |
CN113370223B * | 2021-04-19 | 2022-09-02 | Rocket Force University of Engineering of the Chinese People's Liberation Army | Following type explosive-handling robot device and control method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108399639A * | 2018-02-12 | 2018-08-14 | Hangzhou Lanxin Technology Co., Ltd. | Fast automatic grabbing and placing method based on deep learning |
CN109483554A * | 2019-01-22 | 2019-03-19 | Tsinghua University | Robotic dynamic grasping method and system based on global and local visual semantics |
CN110211180A * | 2019-05-16 | 2019-09-06 | Xi'an University of Technology | Autonomous grasping method of mechanical arm based on deep learning |
CN110744544A * | 2019-10-31 | 2020-02-04 | Kunshan Industrial Research Institute Intelligent Manufacturing Technology Co., Ltd. | Service robot vision grabbing method and service robot |
CN111055281A * | 2019-12-19 | 2020-04-24 | Hangzhou Dianzi University | ROS-based autonomous mobile grabbing system and method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6126067B2 * | 2014-11-28 | 2017-05-10 | Fanuc Corporation | Collaborative system with machine tool and robot |
JP6546618B2 * | 2017-05-31 | 2019-07-17 | Preferred Networks, Inc. | Learning apparatus, learning method, learning model, detection apparatus and gripping system |
-
2020
- 2020-08-06 CN CN202010783580.1A patent/CN112372641B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN112372641A (en) | 2021-02-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106041937B (en) | A kind of control method of the manipulator crawl control system based on binocular stereo vision | |
CN106737665B (en) | Based on binocular vision and the matched mechanical arm control system of SIFT feature and implementation method | |
CN110281231B (en) | Three-dimensional vision grabbing method for mobile robot for unmanned FDM additive manufacturing | |
CN112372641B (en) | Household service robot article grabbing method based on visual feedforward and visual feedback | |
CN113379849B (en) | Robot autonomous recognition intelligent grabbing method and system based on depth camera | |
CN107662195A (en) | A kind of mechanical hand principal and subordinate isomery remote operating control system and control method with telepresenc | |
CN106485746A (en) | Visual servo mechanical hand based on image no demarcation and its control method | |
CN112906797A (en) | Plane grabbing detection method based on computer vision and deep learning | |
CN110605711B (en) | Method, device and system for controlling cooperative robot to grab object | |
CN112509063A (en) | Mechanical arm grabbing system and method based on edge feature matching | |
CN111923053A (en) | Industrial robot object grabbing teaching system and method based on depth vision | |
CN110909644A (en) | Method and system for adjusting grabbing posture of mechanical arm end effector based on reinforcement learning | |
CN114952809A (en) | Workpiece identification and pose detection method and system and grabbing control method of mechanical arm | |
CN115213896A (en) | Object grabbing method, system and equipment based on mechanical arm and storage medium | |
CN109146939A (en) | A kind of generation method and system of workpiece grabbing template | |
CN108748149A (en) | Based on deep learning without calibration mechanical arm grasping means under a kind of complex environment | |
CN114851201A (en) | Mechanical arm six-degree-of-freedom vision closed-loop grabbing method based on TSDF three-dimensional reconstruction | |
KR101926351B1 (en) | Robot apparatus for simulating artwork | |
CN114463244A (en) | Vision robot grabbing system and control method thereof | |
CN109087343A (en) | A kind of generation method and system of workpiece grabbing template | |
Karuppiah et al. | Automation of a wheelchair mounted robotic arm using computer vision interface | |
CN115861780B (en) | Robot arm detection grabbing method based on YOLO-GGCNN | |
CN114187312A (en) | Target object grabbing method, device, system, storage medium and equipment | |
CN117021099A (en) | Human-computer interaction method oriented to any object and based on deep learning and image processing | |
CN115194774A (en) | Binocular vision-based control method for double-mechanical-arm gripping system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |