CN109129474B - Multi-mode fusion-based active manipulator grabbing device and method


Info

Publication number
CN109129474B
Authority
CN
China
Prior art keywords
grabbed
grabbing
manipulator
information
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810911069.8A
Other languages
Chinese (zh)
Other versions
CN109129474A (en)
Inventor
王伟明
马进
薛腾
韩鸣朔
刘文海
潘震宇
邵全全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201810911069.8A
Publication of CN109129474A
Application granted
Publication of CN109129474B
Legal status: Active


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention provides an active manipulator grabbing device and method based on multi-modal fusion. The device comprises a base (1), a mechanical arm (2), a laser radar (3), a binocular vision system (4) and a manipulator (5); one end of the mechanical arm (2) and the laser radar (3) are each fixedly mounted on the base (1), while the binocular vision system (4) and the manipulator (5) are each fixedly mounted at the other end of the mechanical arm. The grabbing method comprises the following steps: step 1: sense the object to be grabbed to obtain perception information; step 2: locate the object to be grabbed according to the perception information to obtain positioning information; step 3: grab the object to be grabbed according to the positioning information. The invention fully accounts for the complex environment of space operations, effectively improves the ability to grab moving objects, and has broad application prospects.

Description

Multi-mode fusion-based active manipulator grabbing device and method
Technical Field
The invention relates to the technical field of positioning and grabbing for space robots, in particular to an active manipulator grabbing device and method based on multi-modal fusion, and more specifically to a robot positioning and active-grabbing technique for microgravity environments that integrates binocular vision from CMOS cameras, laser radar and tactile perception.
Background
The aerospace programs of the world's major countries are accelerating, and life-science experiments and space operations for exploring space are increasingly common. Traditional space activities rely on preset equipment instructions, direct operation by space-station crew, or remote operation by ground staff; they lack automatic, real-time interaction with and learning from the environment, which makes complex tasks such as grabbing moving objects in a microgravity environment difficult to accomplish. Existing research on the automatic grabbing of moving objects in microgravity focuses mainly on combining tactile perception with passive compliant mechanisms to absorb the impact forces that arise during grasping, thereby improving success rate and reliability. Research that comprehensively exploits the fusion of multi-modal information such as touch and vision to achieve active manipulator grasping remains scarce; the difficulties lie in sensor disturbance from the harsh space environment and in predicting the target's motion trajectory from imprecise sensing information. Comprehensively exploiting the correlation and complementarity among multi-modal sensor data is therefore of great significance for improving grasping efficiency and robustness.
Disclosure of Invention
In view of the defects in the prior art, the object of the invention is to provide an active manipulator grabbing device and method based on multi-modal fusion.
According to one aspect of the invention, an active manipulator grabbing device based on multi-modal fusion is provided, comprising a base, a mechanical arm, a laser radar, a binocular vision system and a manipulator; one end of the mechanical arm and the laser radar are each fixedly mounted on the base, while the binocular vision system and the manipulator are each fixedly mounted at the other end of the mechanical arm.
Preferably, a deep learning image processor chip for multi-modal information fusion, mechanical arm motion planning and grabbing control tasks is packaged in the base.
Preferably, the binocular vision system is installed symmetrically about the axis of the mechanical arm at the other end of the mechanical arm.
Preferably, a touch sensor for feeding back a grabbing state in real time, predicting the pose of an object and controlling the clamping force is installed inside the manipulator.
According to another aspect of the invention, a manipulator active grabbing method based on multi-mode fusion is provided, which comprises the following steps:
step 1: sensing the object to be grabbed to obtain perception information;
step 2: positioning the object to be grabbed according to the perception information to obtain positioning information;
step 3: grabbing the object to be grabbed according to the positioning information.
Preferably, the perception information includes radar images and visual images, and the step 1 includes the steps of:
step 1.1: acquiring a radar image of an object to be grabbed through a laser radar;
step 1.2: acquiring a visual image of the object to be grabbed through a binocular vision system.
Preferably, the step 2 comprises the steps of:
step 2.1: performing information fusion on the radar image and the visual image to obtain state information of the object to be grabbed;
step 2.2: predicting the motion posture and/or position information of the object to be grabbed according to the state information obtained after the radar image and the visual image are fused;
step 2.3: judging, according to the predicted motion posture and/or position information, whether the object to be grabbed has entered the grabbing range: if it has entered the grabbing range, taking the predicted motion posture and/or position information as the positioning information and proceeding to step 3; if it has not entered the grabbing range, returning to step 2.1 and continuing execution.
Preferably, the step 3 comprises the steps of:
step 3.1: according to the positioning information of the object to be grabbed that has entered the grabbing range, adjusting the grabbing posture of the manipulator and executing the grabbing operation;
step 3.2: sensing tactile information through a tactile sensor and judging whether the grab succeeded: if it succeeded, ending the process; if it failed, returning to step 3.1 and continuing execution.
Preferably, the binocular vision system is used for identifying the type of the object to be grabbed and judging the spatial position relationship between the object to be grabbed and the manipulator.
Preferably, the laser radar is used for identifying the outline of the object to be grabbed and marking the object to be grabbed in the visual image.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention fully accounts for the complex environment of space operations, integrates binocular vision, laser radar and tactile perception, and effectively improves the robot's ability to locate and actively grab moving objects in a microgravity environment.
2. The invention uses the information acquired by the laser radar to mark the moving object to be grabbed in the image acquired by the binocular vision system, which avoids the traditional vision method's susceptibility to strong-light interference, improves the recognition accuracy for the object to be grabbed, and reduces the image-recognition difficulty of the computer vision system.
3. The invention adopts the RNN-LSTM algorithm to fuse multi-modal information from the images acquired by the binocular vision system and the laser radar, overcoming the incompleteness of single-modality environment perception.
4. The invention predicts the trajectory of the object to be grabbed with a spatio-temporal relationship reasoning algorithm operating on the fused multi-modal information, judges in real time the object's posture and the relative spatial position between the object and the manipulator, and improves the probability of a successful grasp.
5. The invention uses the touch sensor to feed back the object's pose information in real time and to control and optimize the grasping force in real time, improving the grasp success rate.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a general structural schematic diagram of the manipulator active gripping device based on multi-mode fusion.
Fig. 2 is a schematic view of the position relationship among the binocular vision system, the mechanical arm and the manipulator in fig. 1.
Fig. 3 is a flowchart of the manipulator active grabbing method based on multi-modal fusion.
Detailed Description
The present invention will be described in detail with reference to specific embodiments. The following embodiments will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that various changes and modifications, obvious to those skilled in the art, can be made without departing from the spirit of the invention, and all of them fall within the scope of the present invention.
To address the problem that environmental factors such as harsh space illumination and electromagnetic fields make it difficult for a binocular vision system to accurately acquire information about a moving object to be grabbed, a laser radar is introduced to monitor surrounding objects in the microgravity environment in real time. The radar image and the visual image are fused through a recurrent neural network with long short-term memory, i.e. the RNN-LSTM algorithm; the binocular vision system is corrected according to the fused information, and accurate state information of the object to be grabbed is acquired. A spatio-temporal relationship reasoning algorithm based on deep-learning theory is then adopted to predict the trajectory of the object to be grabbed, and finally a manipulator with a tactile sensor at its end executes the grabbing operation, which improves the grasp success rate.
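The patent names the RNN-LSTM fusion step but does not disclose network dimensions, feature extraction, or training details. The following is a minimal illustrative sketch, under assumed feature sizes and a PyTorch framing, of how per-frame lidar and binocular-vision features could be fused recurrently into a state estimate for the object to be grabbed; every dimension and layer choice here is an assumption, not the patented design.

```python
# Minimal RNN-LSTM fusion sketch. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultiModalFusionLSTM(nn.Module):
    """Fuses per-frame lidar and binocular-vision feature vectors over time."""

    def __init__(self, lidar_dim=64, vision_dim=128, hidden_dim=256, state_dim=7):
        super().__init__()
        # Project both modalities into a common space before recurrent fusion.
        self.lidar_proj = nn.Linear(lidar_dim, hidden_dim)
        self.vision_proj = nn.Linear(vision_dim, hidden_dim)
        self.lstm = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        # Output: object state, e.g. 3D position plus orientation quaternion.
        self.state_head = nn.Linear(hidden_dim, state_dim)

    def forward(self, lidar_feats, vision_feats):
        # lidar_feats: (batch, time, lidar_dim); vision_feats: (batch, time, vision_dim)
        fused_in = torch.cat(
            [self.lidar_proj(lidar_feats), self.vision_proj(vision_feats)], dim=-1
        )
        hidden_seq, _ = self.lstm(fused_in)
        # One fused state estimate per time step for the object to be grabbed.
        return self.state_head(hidden_seq)

# Example: 10 time steps of synchronized features for one object track.
model = MultiModalFusionLSTM()
states = model(torch.randn(1, 10, 64), torch.randn(1, 10, 128))  # shape (1, 10, 7)
```

Concatenating modality projections before a shared LSTM is only one plausible fusion topology consistent with the description; gated or attention-based mixing would fit it equally well.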
According to one aspect of the invention, an active manipulator grabbing device based on multi-modal fusion is provided. As shown in fig. 1, it comprises a base 1, a mechanical arm 2, a laser radar 3, a binocular vision system 4 and a manipulator 5; one end of the mechanical arm 2 and the laser radar 3 are each fixedly mounted on the base 1, while the binocular vision system 4 and the manipulator 5 are each fixedly mounted at the other end of the mechanical arm. A deep-learning image processor chip for multi-modal information fusion, mechanical-arm motion planning and grabbing-control tasks is packaged inside the base 1. As shown in fig. 2, the binocular vision system 4 is installed symmetrically about the axis of the mechanical arm 2 at its other end. A touch sensor for feeding back the grabbing state in real time, predicting the object's pose and controlling the clamping force is installed inside the manipulator 5. The binocular vision system 4 is used to identify the type of the object to be grabbed and to judge the spatial position relationship between the object and the manipulator 5. The laser radar 3 is used to identify the outline of the object to be grabbed and to mark the object in the visual image.
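The description assigns the laser radar 3 the role of marking the object to be grabbed in the visual image. One plausible realization, sketched below under an assumed pinhole camera model with illustrative intrinsics K (the patent specifies the marking role, not the projection math or calibration), is to project lidar returns into the camera frame and rasterize them into an object mask:

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy); illustrative values only.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def mark_object(lidar_points_cam, image_shape=(480, 640)):
    """Rasterize lidar returns (already in the camera frame) into a pixel mask
    that marks the object to be grabbed in the visual image."""
    pts = np.asarray(lidar_points_cam, dtype=float)
    pts = pts[pts[:, 2] > 0]                    # keep points in front of the camera
    uv = (K @ pts.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)   # perspective divide -> pixel coords
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < image_shape[1])
              & (uv[:, 1] >= 0) & (uv[:, 1] < image_shape[0]))
    mask = np.zeros(image_shape, dtype=bool)
    mask[uv[inside, 1], uv[inside, 0]] = True
    return mask

# Example: three fake lidar returns one metre in front of the camera.
print(mark_object([[0.0, 0.0, 1.0], [0.05, 0.0, 1.0], [0.0, 0.05, 1.0]]).sum())  # 3
```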
According to another aspect of the invention, an active manipulator grabbing method based on multi-modal fusion is provided, in particular one that uses the multi-modal-fusion-based grabbing device described above; as shown in fig. 3, it comprises the following steps:
step 1: sensing the object to be grabbed to obtain perception information;
step 2: positioning the object to be grabbed according to the perception information to obtain positioning information;
step 3: grabbing the object to be grabbed according to the positioning information.
The perception information comprises a radar image and a visual image, and step 1 comprises the following steps:
step 1.1: acquiring a radar image of an object to be grabbed through a laser radar 3;
step 1.2: acquiring a visual image of the object to be grabbed through the binocular vision system 4.
The step 2 comprises the following steps:
step 2.1: fusing the radar image and the visual image through the RNN-LSTM algorithm to obtain state information of the object to be grabbed;
step 2.2: predicting the motion posture and/or position information of the object to be grabbed with the spatio-temporal relationship reasoning algorithm, according to the state information obtained after the radar image and the visual image are fused;
step 2.3: judging, according to the predicted motion posture and/or position information, whether the object to be grabbed has entered the grabbing range: if it has entered the grabbing range, taking the predicted motion posture and/or position information as the positioning information and proceeding to step 3; if it has not entered the grabbing range, returning to step 2.1 and continuing execution. A minimal sketch of this localization loop is given below.
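In the sketch, a constant-velocity extrapolation stands in for the spatio-temporal relationship reasoning algorithm (which the patent does not detail), and the 0.8 m reach radius is an illustrative placeholder, not a value from the patent:

```python
import numpy as np

GRAB_RADIUS_M = 0.8  # assumed manipulator reach; not specified in the patent

def predict_position(track, dt=0.05, horizon_steps=10):
    """Constant-velocity extrapolation standing in for the patent's
    spatio-temporal relationship reasoning algorithm."""
    track = np.asarray(track, dtype=float)       # (time, 3) fused object positions
    velocity = (track[-1] - track[-2]) / dt      # latest velocity estimate
    return track[-1] + velocity * dt * horizon_steps

def entered_grab_range(predicted_pos, manipulator_pos):
    """Step 2.3 test: proceed to step 3 only once the prediction is reachable;
    otherwise the caller loops back to step 2.1 with fresh sensor data."""
    return np.linalg.norm(predicted_pos - manipulator_pos) <= GRAB_RADIUS_M

# Example: an object drifting toward a manipulator near the origin in microgravity.
track = [[1.0, 0.0, 0.5], [0.9, 0.0, 0.5], [0.8, 0.0, 0.5]]
pred = predict_position(track)                              # -> [-0.2, 0.0, 0.5]
print(entered_grab_range(pred, np.array([0.0, 0.0, 0.5])))  # True: go to step 3
```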
The step 3 comprises the following steps:
step 3.1: according to the positioning information of the object to be grabbed that has entered the grabbing range, adjusting the grabbing posture of the manipulator 5 and executing the grabbing operation;
step 3.2: sensing tactile information through the tactile sensor and judging whether the grab succeeded: if it succeeded, ending the process; if it failed, returning to step 3.1 and continuing execution. A sketch of this closed loop follows.
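The patent specifies the control flow of step 3 (adjust posture, grasp, verify by touch, retry on failure) but no concrete robot API; the SimulatedManipulator class, the force threshold and the retry budget below are hypothetical stand-ins used only to make the loop concrete.

```python
import time

FORCE_THRESHOLD_N = 2.0   # assumed contact force that indicates a held object
MAX_ATTEMPTS = 5          # assumed retry budget; the patent simply loops on failure

class SimulatedManipulator:
    """Hypothetical stand-in for the manipulator 5 and its internal tactile sensor."""
    def __init__(self):
        self._holding = False

    def move_to(self, positioning_info):
        pass                        # step 3.1: pose adjustment (motion planning omitted)

    def close_gripper(self):
        self._holding = True        # pretend the grasp succeeds for this demo

    def open_gripper(self):
        self._holding = False

    def tactile_force(self):
        return 3.5 if self._holding else 0.0   # fake tactile reading in newtons

def grasp_with_tactile_feedback(manipulator, positioning_info):
    for _ in range(MAX_ATTEMPTS):
        manipulator.move_to(positioning_info)    # step 3.1: adjust grabbing posture
        manipulator.close_gripper()              # step 3.1: execute the grab
        time.sleep(0.1)                          # let tactile readings settle
        if manipulator.tactile_force() >= FORCE_THRESHOLD_N:
            return True                          # step 3.2: grab succeeded, end
        manipulator.open_gripper()               # step 3.2: failed, retry from 3.1
    return False

print(grasp_with_tactile_feedback(SimulatedManipulator(), positioning_info=None))  # True
```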
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (6)

1. A manipulator active grabbing method based on multi-modal fusion, characterized by comprising the following steps:
step 1: sensing an object to be grabbed to obtain perception information;
step 2: positioning the object to be grabbed according to the perception information to obtain positioning information;
step 3: grabbing the object to be grabbed according to the positioning information;
the perception information comprises radar images and visual images, and the step 1 comprises the following steps:
step 1.1: acquiring a radar image of an object to be grabbed through a laser radar (3);
step 1.2: acquiring a visual image of the object to be grabbed through a binocular vision system (4);
the step 2 comprises the following steps:
step 2.1: performing information fusion on the radar image and the visual image to obtain state information of the object to be grabbed;
step 2.2: predicting the motion posture and/or position information of the object to be grabbed according to the state information obtained after the radar image and the visual image are fused;
step 2.3: judging, according to the predicted motion posture and/or position information, whether the object to be grabbed has entered the grabbing range: if it has entered the grabbing range, taking the predicted motion posture and/or position information as the positioning information and proceeding to step 3; if it has not entered the grabbing range, returning to step 2.1 and continuing execution;
the binocular vision system (4) is used for identifying the type of an object to be grabbed and judging the spatial position relation between the object to be grabbed and the manipulator (5);
the laser radar (3) is used for identifying the outline of the object to be grabbed and marking the object to be grabbed in the visual image.
2. The manipulator active grabbing method based on multi-modal fusion as claimed in claim 1, wherein step 3 comprises the following steps:
step 3.1: according to the positioning information of the object to be grabbed that has entered the grabbing range, adjusting the grabbing posture of the manipulator (5) and executing the grabbing operation;
step 3.2: sensing tactile information through a tactile sensor and judging whether the grab succeeded: if it succeeded, ending the process; if it failed, returning to step 3.1 and continuing execution.
3. A manipulator active grabbing device based on multi-modal fusion, characterized in that it performs active grabbing using the manipulator active grabbing method based on multi-modal fusion of any one of claims 1-2, and comprises a base (1), a mechanical arm (2), a laser radar (3), a binocular vision system (4) and a manipulator (5); one end of the mechanical arm (2) and the laser radar (3) are each fixedly mounted on the base (1), while the binocular vision system (4) and the manipulator (5) are each fixedly mounted at the other end of the mechanical arm.
4. The manipulator active gripping device based on multi-modal fusion as claimed in claim 3, wherein the base (1) is internally packaged with a deep learning image processor chip for multi-modal information fusion, mechanical arm motion planning and gripping control tasks.
5. The manipulator active gripping device based on multi-modal fusion as claimed in claim 3, characterized in that the binocular vision system (4) is installed symmetrically about the axis of the mechanical arm (2) at the other end of the mechanical arm (2).
6. The manipulator active gripping device based on multi-modal fusion as claimed in claim 3, characterized in that a touch sensor for real-time feedback of gripping state, prediction of object pose and control of clamping force is installed inside the manipulator (5).
CN201810911069.8A 2018-08-10 2018-08-10 Multi-mode fusion-based active manipulator grabbing device and method Active CN109129474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810911069.8A CN109129474B (en) 2018-08-10 2018-08-10 Multi-mode fusion-based active manipulator grabbing device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810911069.8A CN109129474B (en) 2018-08-10 2018-08-10 Multi-mode fusion-based active manipulator grabbing device and method

Publications (2)

Publication Number Publication Date
CN109129474A CN109129474A (en) 2019-01-04
CN109129474B (en) 2020-07-14

Family

ID=64792860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810911069.8A Active CN109129474B (en) 2018-08-10 2018-08-10 Multi-mode fusion-based active manipulator grabbing device and method

Country Status (1)

Country Link
CN (1) CN109129474B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993763B (en) * 2019-03-28 2021-10-08 北京理工大学 Detector positioning method and system based on image recognition and force feedback fusion
CN110666792B (en) * 2019-09-04 2022-11-25 南京富尔登科技发展有限公司 Multi-point-position cooperative control manufacturing and assembling device and method based on information fusion
CN111168685B (en) * 2020-02-17 2021-06-18 上海高仙自动化科技发展有限公司 Robot control method, robot, and readable storage medium
CN111730606B (en) * 2020-08-13 2022-03-04 深圳国信泰富科技有限公司 Grabbing action control method and system of high-intelligence robot
CN111958596B (en) * 2020-08-13 2022-03-04 深圳国信泰富科技有限公司 Action planning system and method for high-intelligence robot
CN112060085B (en) * 2020-08-24 2021-10-08 清华大学 Robot operation pose control method based on visual-touch multi-scale positioning
CN112207804A (en) * 2020-12-07 2021-01-12 国网瑞嘉(天津)智能机器人有限公司 Live working robot and multi-sensor identification and positioning method
CN112777555A (en) * 2021-03-23 2021-05-11 江苏华谊广告设备科技有限公司 Intelligent oiling device and method
CN113433941A (en) * 2021-06-29 2021-09-24 之江实验室 Multi-modal knowledge graph-based low-level robot task planning method
CN113954076B (en) * 2021-11-12 2023-01-13 哈尔滨工业大学(深圳) Robot precision assembling method based on cross-modal prediction assembling scene
CN115431279B (en) * 2022-11-07 2023-03-24 佛山科学技术学院 Mechanical arm autonomous grabbing method based on visual-touch fusion under weak rigidity characteristic condition
CN117207190B (en) * 2023-09-28 2024-05-10 重庆大学 Accurate robot system that snatchs based on vision and sense of touch fuse

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0263952A2 (en) * 1986-10-15 1988-04-20 Mercedes-Benz Ag Robot unit with moving manipulators
CN1343551A (en) * 2000-09-21 2002-04-10 上海大学 Hierarchical modular model for robot's visual sense
CN107576960A (en) * 2017-09-04 2018-01-12 苏州驾驶宝智能科技有限公司 The object detection method and system of vision radar Spatial-temporal Information Fusion
CN107838932A (en) * 2017-12-14 2018-03-27 昆山市工研院智能制造技术有限公司 A kind of robot of accompanying and attending to multi-degree-of-freemechanical mechanical arm
CN108214487A (en) * 2017-12-16 2018-06-29 广西电网有限责任公司电力科学研究院 Based on the positioning of the robot target of binocular vision and laser radar and grasping means


Also Published As

Publication number Publication date
CN109129474A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN109129474B (en) Multi-mode fusion-based active manipulator grabbing device and method
Zhang et al. Recurrent neural network for motion trajectory prediction in human-robot collaborative assembly
EP3414710B1 (en) Deep machine learning methods and apparatus for robotic grasping
US8244402B2 (en) Visual perception system and method for a humanoid robot
US9108321B2 (en) Motion prediction control device and method
CN114080583A (en) Visual teaching and repetitive motion manipulation system
CN110856932A (en) Interference avoidance device and robot system
Berger et al. A multi-camera system for human detection and activity recognition
EP3936286A1 (en) Robot control device, robot control method, and robot control program
WO2021147034A1 (en) System and method for controlling the robot, electronic device and computer readable medium
CN116852352A (en) Positioning method for mechanical arm of electric secondary equipment based on ArUco code
Stronger et al. Selective visual attention for object detection on a legged robot
Tosi et al. Action selection for touch-based localisation trading off information gain and execution time
US20240173857A1 (en) System and method for controlling the robot, electronic device and computer readable medium
CN116348912A (en) Method and system for object tracking in robotic vision guidance
US11766784B1 (en) Motion capture method and system of robotic arm, medium, and electronic device
Huang et al. CEASE: Collision-Evaluation-based Active Sense System for Collaborative Robotic Arms
Martínez et al. Visual predictive control of robot manipulators using a 3D ToF camera
CN114083545B (en) Moving object robot grabbing method and device based on visual perception
Yu et al. Grasping perception method of space manipulator for complex scene task
US20240139962A1 (en) Iterative control of robot for target object
WO2023100282A1 (en) Data generation system, model generation system, estimation system, trained model production method, robot control system, data generation method, and data generation program
CN117428792A (en) Operating system and method for robot
CN114888851A (en) Moving object robot grabbing device based on visual perception
JP2011235379A (en) Control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant