CN112109074A - Robot target image capturing method - Google Patents

Robot target image capturing method

Info

Publication number
CN112109074A
Authority
CN
China
Prior art keywords
robot
target
target object
image
manipulator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010106418.6A
Other languages
Chinese (zh)
Inventor
庄永军 (Zhuang Yongjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sanbao Innovation Intelligence Co ltd
Original Assignee
Shenzhen Sanbao Innovation Intelligence Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sanbao Innovation Intelligence Co ltd filed Critical Shenzhen Sanbao Innovation Intelligence Co ltd
Priority to CN202010106418.6A priority Critical patent/CN112109074A/en
Publication of CN112109074A publication Critical patent/CN112109074A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1605 Simulation of manipulator lay-out, design, modelling of manipulator
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

A robot target image capturing method includes: the robot rotates so that the 3D camera at its head recognizes a target object and captures an image; the robot moves to the optimal workspace position for its manipulator to grab the target object; the 3D camera shoots the target object from different directions and transmits the image signals to a computer, which uses the image information to build a 3D model of the target object; the grabbing pose of the manipulator's end effector is planned according to inverse kinematics, and a collision-free grabbing path is generated for the manipulator's motion; and the rotation of each joint motor is calculated using the inverse kinematics model of the robot arm to generate an expected trajectory, with the end effector's motions coordinated to achieve stable, human-like grasping of the target object. According to the invention, images are captured by the 3D camera, the target object is positioned by a visual feedback method, and the expected trajectory for grasping the target object is calculated according to inverse kinematics, so that the humanoid hand can interact with people more naturally and work more flexibly in complex environments.

Description

Robot target image capturing method
Technical Field
The invention relates to the technical field of robot control, in particular to a robot target image capturing method.
Background
Currently, when a robot is used to grasp an object, it is common to capture an image of the product with a camera and then process the image to acquire object information, such as the object's size. The object information is then sent to the robot's control unit so that the robot can control the specific operation of the robot arm according to this information, such as the object's size and its position in the image.
Traditional robots are rigid, strong, precise, and fast, and are widely used in industry. However, as many researchers and engineers have worked to extend rigid robots from industrial production lines to other fields (such as housekeeping services, care for the elderly and disabled, agricultural automation, and medical rehabilitation), robots that depend heavily on structured environments and accurate mathematical models have proven hard to apply, because unstructured, complex environments are difficult to describe with accurate mathematical models. When interacting with complex and changeable objects, the robot's high rigidity, strength, and precision become shortcomings that leave it inadequate for such tasks. Against this background, new lines of robot research are gradually emerging.
For a grasping robot, fingers like a human's are difficult to design and manufacture and prone to failure, and the dozens of joints of mechanical fingers are complicated to control. More sophisticated humanoid hands are needed: a humanoid hand can interact with people more naturally and work more flexibly in complex environments.
Disclosure of Invention
(I) Objects of the invention
In order to solve the technical problems in the background art, the invention provides a robot target image capturing method in which images are captured by a 3D camera, the target object is positioned by a visual feedback method, and the expected trajectory for grasping the target object is calculated according to inverse kinematics, so that the humanoid hand can interact with people more naturally and work more flexibly in complex environments.
(II) Technical solution
In order to solve the above problems, the present invention provides a robot target image capturing method, including:
the robot rotates so that the 3D camera at its head recognizes a target object and captures an image;
the robot moves to the optimal workspace position for its manipulator to grab the target object;
the 3D camera shoots the target object from different directions and transmits the image signals to a computer, and the computer uses the image information to build a 3D model of the target object;
the grabbing pose of the manipulator's end effector is planned according to inverse kinematics, and a collision-free grabbing path is generated for the manipulator's motion;
and the rotation of each joint motor is calculated using the inverse kinematics model of the robot arm to generate an expected trajectory, with the end effector's motions coordinated to achieve stable, human-like grasping of the target object.
Preferably, the spatial position information of the target object in the 3D camera coordinate system is converted, through hand-eye calibration, into its representation in the manipulator base coordinate system in order to position the target object.
Preferably, when the image captured by the 3D camera is preprocessed, color segmentation is performed on the H color channel based on the HSV model, followed by contour extraction, morphological operations to remove partial interference, and Hough-transform circle detection; features such as circularity, area, circumscribed-rectangle area, and the ratio of area to circumscribed-rectangle area are then thresholded for final identification.
Preferably, a smooth transition trajectory is fitted with a cubic B-spline curve according to the target pose information.
Preferably, the cubic B-spline curve fitting is based mainly on the de Boor recursion formula, which finally yields the curve equation P(x) = p1·N1(x) + p2·N2(x) + p3·N3(x) + p4·N4(x), where p1, …, p4 are the control vertices and the Ni(x) are the cubic B-spline basis functions.
Preferably, the steps for obtaining the expected trajectory for grasping the target object are as follows:
S1, solving inverse kinematics for the discrete end-effector points to obtain the joint angles corresponding to each discrete point;
S2, fitting each joint's angles with a cubic spline curve to obtain that joint's expected grasping trajectory; to track the expected trajectory, active disturbance rejection control is mainly adopted, because the PID control used at an earlier stage gave unsatisfactory results.
Preferably, the active disturbance rejection controller (ADRC) consists of a tracking differentiator (TD), a nonlinear state error feedback control law (NLSEF), and an extended state observer (ESO); organically combined, the ADRC can well solve the tracking control problem of the controlled object.
Preferably, the target is located using visual feedback.
Preferably, the visual feedback method comprises the following steps:
s1, controlling coarse movement to enable the manipulator to move to the range of the target object, and enabling the vision system to see the tail end of the manipulator and the target simultaneously;
s2, attaching a label easy to identify and position to the mechanical speaking purpose, and detecting the position error between the label and the target center in real time;
and S3, obtaining the micro-rotation quantity of each joint according to the error as the expected tracking quantity of the ADRC, and finally realizing accurate control to the target position.
When the robot system is used in an actual working environment, the robot system rotates to drive the 3D camera at the head of the robot system to shoot a scene to search for a target object, after the target object is identified, the robot system moves to the position of the optimal working space when a manipulator grabs the target object, then the 3D camera shoots the target object from different directions and transmits image signals to the computer, then the computer carries out 3D modeling on the target object by utilizing image information, plans the grabbing pose of the end effector and generates a target grabbing path without collision in the motion of the manipulator, the corresponding rotation quantity of each joint motor is calculated by utilizing an inverse kinematics model of the robot arm, finally the controller generates a control instruction to control each joint motor of the manipulator to track an expected track, and the end effector generates corresponding action coordination to realize the stable grabbing of the target object by simulating a human. The imitation hand can better perform man-machine interaction and can work in a complex environment more flexibly.
According to the invention, the target object is positioned through hand-eye calibration, so that the positioning is more convenient and accurate; when the image is preprocessed, the above reference is adopted, so that the image processing is more convenient and faster; fitting a smooth transitional track by utilizing a cubic B-spline curve, ensuring the stability of the movement of the manipulator, and avoiding the increase of the abrasion of mechanical equipment due to buffeting caused by sudden change; the coordinates of the curve fitting points are calculated through a curve equation, so that the result is more accurate, and the stability of the movement of the manipulator is ensured; the use of the active disturbance rejection controller enables the control effect to be better; the target object can be accurately positioned by adopting a visual feedback method.
Drawings
Fig. 1 is a schematic flow chart of a robot target image capture method according to the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and the following detailed description. It should be understood that the description is intended to be exemplary only and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present invention.
As shown in fig. 1, in the robot target image capturing method provided by the present invention, the robot rotates so that the 3D camera at its head recognizes a target object and captures an image;
the robot moves to the optimal workspace position for its manipulator to grab the target object;
the 3D camera shoots the target object from different directions and transmits the image signals to the computer, and the computer uses the image information to build a 3D model of the target object;
the grabbing pose of the manipulator's end effector is planned according to inverse kinematics, and a collision-free grabbing path is generated for the manipulator's motion;
and the rotation of each joint motor is calculated using the inverse kinematics model of the robot arm to generate an expected trajectory, with the end effector's motions coordinated to achieve stable, human-like grasping of the target object.
When used in an actual working environment, the robot rotates so that the 3D camera at its head scans the scene in search of a target object. After the target object is identified, the robot moves to the optimal workspace position for its manipulator to grab it. The 3D camera then shoots the target object from different directions and transmits the image signals to the computer, which uses the image information to build a 3D model of the target object, plans the grabbing pose of the end effector, and generates a collision-free grabbing path for the manipulator's motion. The corresponding rotation of each joint motor is calculated using the inverse kinematics model of the robot arm; finally, the controller generates control commands so that each joint motor of the manipulator tracks the expected trajectory, and the end effector's motions are coordinated to achieve stable, human-like grasping of the target object. The humanoid hand can thus interact with people more naturally and work more flexibly in complex environments.
In an alternative embodiment, the spatial position information of the target object in the 3D camera coordinate system is converted, through hand-eye calibration, into its representation in the manipulator base coordinate system in order to position the target object.
It should be noted that positioning the target object through hand-eye calibration makes the positioning more convenient and accurate.
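As a non-limiting illustration, the following minimal Python sketch shows how such a hand-eye calibration result is typically applied: a point detected in the 3D camera frame is mapped into the manipulator base frame by a 4x4 homogeneous transform. The transform values and the detected point are hypothetical placeholders, not calibration data from the patent.

```python
import numpy as np

# Hypothetical hand-eye calibration result: the pose of the 3D camera
# expressed in the manipulator base frame (rotation + translation).
T_base_cam = np.array([
    [0.0, -1.0, 0.0, 0.30],
    [1.0,  0.0, 0.0, 0.05],
    [0.0,  0.0, 1.0, 0.90],   # camera origin 0.90 m above the base (illustrative)
    [0.0,  0.0, 0.0, 1.0],
])

def camera_to_base(p_cam, T=T_base_cam):
    """Convert a 3D point from camera coordinates to base coordinates."""
    p_h = np.append(np.asarray(p_cam, float), 1.0)  # homogeneous coordinates
    return (T @ p_h)[:3]

# Example: a target detected 0.6 m in front of the camera.
print(camera_to_base([0.10, -0.05, 0.60]))
```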
In an alternative embodiment, when the image captured by the 3D camera is preprocessed, color segmentation is performed on the H color channel based on the HSV model, followed by contour extraction, morphological operations to remove partial interference, and Hough-transform circle detection; features such as circularity, area, circumscribed-rectangle area, and the ratio of area to circumscribed-rectangle area are then thresholded for final identification.
It should be noted that preprocessing the image with the pipeline described above makes image processing more convenient.
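One possible realization of this preprocessing chain is sketched below with OpenCV in Python; every numeric threshold (the H range, Hough parameters, minimum area, minimum circularity, and minimum area-to-circumscribed-rectangle ratio) is an illustrative assumption rather than a value specified in the patent.

```python
import cv2
import numpy as np

def detect_target(bgr):
    """H-channel segmentation (HSV model), morphological cleanup,
    Hough-transform circle detection, then thresholding on circularity,
    area, and area/circumscribed-rectangle ratio for final identification."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))    # segment on H (with S/V floor)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove partial interference
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                               param1=100, param2=20, minRadius=10, maxRadius=200)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 100:                                      # drop tiny blobs
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / perimeter ** 2     # 1.0 for an ideal circle
        x, y, w, h = cv2.boundingRect(c)
        extent = area / (w * h)                             # area / circumscribed-rect area
        if circularity >= 0.7 and extent >= 0.6:            # final identification thresholds
            candidates.append(((x + w / 2.0, y + h / 2.0), area))
    return candidates, circles
```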
In an alternative embodiment, a smooth transition trajectory is fitted with a cubic B-spline curve according to the target pose information.
It should be noted that fitting a smooth transition trajectory with a cubic B-spline curve ensures the stability of the manipulator's motion and avoids the extra wear on mechanical equipment caused by buffeting from sudden changes.
In an alternative embodiment, the cubic B-spline curve fitting is based mainly on the de Boor recursion formula, which finally yields the curve equation P(x) = p1·N1(x) + p2·N2(x) + p3·N3(x) + p4·N4(x), where p1, …, p4 are the control vertices and the Ni(x) are the cubic B-spline basis functions.
It should be noted that computing the coordinates of the curve-fitting points from the curve equation makes the result more accurate and ensures the stability of the manipulator's motion.
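The short Python sketch below, assuming SciPy is available, evaluates exactly this kind of four-control-vertex cubic B-spline. The control vertices and the clamped knot vector are illustrative placeholders; with a clamped knot vector the curve starts at p1 and ends at p4, which is convenient for joining trajectory segments.

```python
import numpy as np
from scipy.interpolate import BSpline

# Four illustrative 2D control vertices p1..p4 and a clamped knot vector
# for a single cubic (k = 3) segment, so P(x) = p1*N1(x) + ... + p4*N4(x).
control = np.array([[0.0, 0.0], [0.1, 0.3], [0.4, 0.35], [0.6, 0.1]])
knots = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)   # degree+1 repeated ends

spline = BSpline(knots, control, k=3)
x = np.linspace(0.0, 1.0, 50)
points = spline(x)    # 50 samples of the smooth transition trajectory
```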
In an alternative embodiment, the steps for obtaining the expected trajectory for grasping the target object include:
S1, solving inverse kinematics for the discrete end-effector points to obtain the joint angles corresponding to each discrete point;
S2, fitting each joint's angles with a cubic spline curve to obtain that joint's expected grasping trajectory; to track the expected trajectory, active disturbance rejection control is mainly adopted, because the PID control used at an earlier stage gave unsatisfactory results.
It should be noted that the expected trajectory for grasping the target object can be obtained through the above steps.
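A minimal Python sketch of S1-S2, assuming an inverse-kinematics solver has already produced joint angles at the discrete end-effector points; the waypoint times and angle values below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def joint_trajectories(times, joint_angle_samples):
    """Fit one cubic spline per joint through the IK solutions (S1),
    yielding each joint's smooth expected trajectory (S2)."""
    q = np.asarray(joint_angle_samples)   # shape (n_waypoints, n_joints)
    return [CubicSpline(times, q[:, j]) for j in range(q.shape[1])]

# Illustrative: 5 discrete waypoints for a 3-joint arm (angles in radians).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
q_samples = np.array([
    [0.00, 0.10, 0.00],
    [0.20, 0.35, 0.10],
    [0.45, 0.50, 0.25],
    [0.60, 0.45, 0.40],
    [0.70, 0.30, 0.50],
])
splines = joint_trajectories(t, q_samples)
q_des  = [s(0.75) for s in splines]      # desired joint angles at t = 0.75 s
qd_des = [s(0.75, 1) for s in splines]   # desired joint velocities (1st derivative)
```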
In an alternative embodiment, the active disturbance rejection controller (ADRC) consists of three parts, namely a tracking differentiator (TD), a nonlinear state error feedback control law (NLSEF), and an extended state observer (ESO); organically combined, the ADRC can well solve the tracking control problem of the controlled object.
It should be noted that the use of the active disturbance rejection controller improves the control effect.
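For concreteness, here is a minimal single-joint, discrete-time sketch of that TD + ESO + NLSEF structure in Python; the gains (r, beta1-beta3, kp, kd, b0) are illustrative and untuned, not parameters from the patent.

```python
import numpy as np

def fal(e, alpha, delta):
    """ADRC's nonlinear gain: linear near zero, power-law farther out."""
    if abs(e) <= delta:
        return e / delta ** (1 - alpha)
    return np.sign(e) * abs(e) ** alpha

class ADRC:
    def __init__(self, h=0.01, r=100.0, b0=1.0,
                 beta1=100.0, beta2=300.0, beta3=1000.0, kp=20.0, kd=5.0):
        self.h, self.r, self.b0 = h, r, b0
        self.beta = (beta1, beta2, beta3)
        self.kp, self.kd = kp, kd
        self.v1 = self.v2 = 0.0            # TD states: smoothed reference and its rate
        self.z1 = self.z2 = self.z3 = 0.0  # ESO states: angle, rate, total disturbance
        self.u = 0.0

    def step(self, ref, y):
        h, r = self.h, self.r
        # TD: arrange a smooth transition process toward the reference.
        fh = -r * np.sign(self.v1 - ref + self.v2 * abs(self.v2) / (2 * r))
        self.v1 += h * self.v2
        self.v2 += h * fh
        # ESO: estimate the joint state and the lumped disturbance from y.
        e = self.z1 - y
        self.z1 += h * (self.z2 - self.beta[0] * e)
        self.z2 += h * (self.z3 - self.beta[1] * fal(e, 0.5, h) + self.b0 * self.u)
        self.z3 += h * (-self.beta[2] * fal(e, 0.25, h))
        # NLSEF: nonlinear PD on the tracking errors, disturbance compensated.
        u0 = self.kp * fal(self.v1 - self.z1, 0.5, h) + self.kd * fal(self.v2 - self.z2, 0.25, h)
        self.u = (u0 - self.z3) / self.b0
        return self.u
```

In use, `step(ref, y)` would be called once per control period for each joint, with the desired angle from the fitted spline as `ref` and the measured joint angle as `y`, returning the motor command.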
In an alternative embodiment, visual feedback is used to locate the target.
It should be noted that the target object can be accurately positioned by using a visual feedback method.
In an alternative embodiment, the visual feedback method comprises the steps of:
s1, controlling coarse movement to enable the manipulator to move to the range of the target object, and enabling the vision system to see the tail end of the manipulator and the target simultaneously;
s2, attaching a label easy to identify and position to the mechanical speaking purpose, and detecting the position error between the label and the target center in real time;
and S3, obtaining the micro-rotation quantity of each joint according to the error as the expected tracking quantity of the ADRC, and finally realizing accurate control to the target position.
It should be noted that the precise position of the target object can be obtained through the above steps.
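One way to realize S2-S3 is a simple image-based servo step that maps the pixel error between the label and the target center to joint micro-rotations through an image Jacobian. In this Python sketch the Jacobian, gain, and pixel coordinates are all hypothetical; in practice the Jacobian would come from calibration or numerical estimation.

```python
import numpy as np

def visual_servo_step(marker_px, target_px, image_jacobian, gain=0.1):
    """Map the pixel error (target center minus label position) to small
    joint rotations; the result serves as the ADRC's desired tracking quantity."""
    error = np.asarray(target_px, float) - np.asarray(marker_px, float)  # (u, v) error
    dq, *_ = np.linalg.lstsq(image_jacobian, gain * error, rcond=None)   # least squares
    return dq

# Illustrative 2x3 image Jacobian (pixels per radian) for a 3-joint arm.
J = np.array([[120.0, -40.0, 10.0],
              [ 15.0,  90.0, 60.0]])
dq = visual_servo_step(marker_px=(310, 250), target_px=(320, 240), image_jacobian=J)
print(dq)   # micro-rotations fed to each joint's ADRC as its setpoint increment
```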
It is to be understood that the above-described embodiments of the present invention merely illustrate or explain the principles of the invention and are not to be construed as limiting it. Therefore, any modification, equivalent replacement, improvement, and the like made without departing from the spirit and scope of the present invention should be included in its protection scope. Furthermore, it is intended that the appended claims cover all such variations and modifications as fall within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.

Claims (9)

1. A robot target image capture method is characterized by comprising the following steps:
the robot rotates so that the 3D camera at its head recognizes a target object and captures an image;
the robot moves to the optimal workspace position for its manipulator to grab the target object;
the 3D camera shoots the target object from different directions and transmits the image signals to a computer, and the computer uses the image information to build a 3D model of the target object;
the grabbing pose of the manipulator's end effector is planned according to inverse kinematics, and a collision-free grabbing path is generated for the manipulator's motion;
and the rotation of each joint motor is calculated using the inverse kinematics model of the robot arm to generate an expected trajectory, with the end effector's motions coordinated to achieve stable, human-like grasping of the target object.
2. The method as claimed in claim 1, wherein the spatial position information of the target object in the 3D camera coordinate system is converted, through hand-eye calibration, into its representation in the manipulator base coordinate system in order to position the target object.
3. The robot target image capturing method according to claim 2, wherein, when the image captured by the 3D camera is preprocessed, color segmentation is performed on the H color channel based on the HSV model, followed by contour extraction, morphological operations to remove partial interference, and Hough-transform circle detection; features such as circularity, area, circumscribed-rectangle area, and the ratio of area to circumscribed-rectangle area are then thresholded for final identification.
4. The robot target image capturing method according to claim 1, wherein a smooth transition trajectory is fitted with a cubic B-spline curve according to the target pose information.
5. The method of claim 4, wherein the cubic B-spline curve fitting is based mainly on the de Boor recursion formula, which finally yields the curve equation P(x) = p1·N1(x) + p2·N2(x) + p3·N3(x) + p4·N4(x), where p1, …, p4 are the control vertices and the Ni(x) are the cubic B-spline basis functions.
6. The robot target image capturing method of claim 1, wherein the steps for obtaining the expected trajectory for grasping the target object comprise:
S1, solving inverse kinematics for the discrete end-effector points to obtain the joint angles corresponding to each discrete point;
S2, fitting each joint's angles with a cubic spline curve to obtain that joint's expected grasping trajectory; to track the expected trajectory, active disturbance rejection control is mainly adopted, because the PID control used at an earlier stage gave unsatisfactory results.
7. The robot target image capturing method according to claim 1, wherein the active disturbance rejection controller (ADRC) consists of a tracking differentiator (TD), a nonlinear state error feedback control law (NLSEF), and an extended state observer (ESO); organically combined, the ADRC can well solve the tracking control problem of the controlled object.
8. The robot target image capturing method as claimed in claim 1, wherein a visual feedback method is used to locate the target.
9. The robot target image capturing method as claimed in claim 8, wherein the visual feedback method comprises the following steps:
S1, coarse motion control moves the manipulator into the vicinity of the target object, so that the vision system can see the end of the manipulator and the target at the same time;
S2, a label that is easy to identify and position is attached to the end of the manipulator, and the position error between the label and the target center is detected in real time;
and S3, the micro-rotation of each joint is derived from this error and used as the ADRC's desired tracking quantity, finally achieving accurate control to the target position.
CN202010106418.6A 2020-02-21 2020-02-21 Robot target image capturing method Pending CN112109074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010106418.6A CN112109074A (en) 2020-02-21 2020-02-21 Robot target image capturing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010106418.6A CN112109074A (en) 2020-02-21 2020-02-21 Robot target image capturing method

Publications (1)

Publication Number Publication Date
CN112109074A true CN112109074A (en) 2020-12-22

Family

ID=73798751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010106418.6A Pending CN112109074A (en) 2020-02-21 2020-02-21 Robot target image capturing method

Country Status (1)

Country Link
CN (1) CN112109074A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112894800A (en) * 2020-12-24 2021-06-04 苏州迈维视电子技术有限公司 Method for workpiece grabbing and blanking guide
CN113199454A (en) * 2021-06-22 2021-08-03 北京航空航天大学 Wheeled mobile intelligent logistics operation robot system
CN113246140A (en) * 2021-06-22 2021-08-13 沈阳风驰软件股份有限公司 Multi-model workpiece disordered grabbing method and device based on camera measurement
CN113894050A (en) * 2021-09-14 2022-01-07 深圳玩智商科技有限公司 Logistics piece sorting method, sorting equipment and storage medium
CN113954075A (en) * 2021-11-10 2022-01-21 佛山市南海区广工大数控装备协同创新研究院 Moving object tracking and grabbing method and device based on active movement of robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1133952A (en) * 1997-07-18 1999-02-09 Yaskawa Electric Corp Method for controlling robot, and method for correcting position and attitude of robot and held object
CN101402199A (en) * 2008-10-20 2009-04-08 北京理工大学 Hand-eye type robot movable target extracting method with low servo accuracy based on visual sensation
CN104842362A (en) * 2015-06-18 2015-08-19 厦门理工学院 Method for grabbing material bag by robot and robot grabbing device
CN107414832A (en) * 2017-08-08 2017-12-01 华南理工大学 A kind of mobile mechanical arm crawl control system and method based on machine vision
CN107571260A (en) * 2017-10-25 2018-01-12 南京阿凡达机器人科技有限公司 The method and apparatus that control machine people captures object

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1133952A (en) * 1997-07-18 1999-02-09 Yaskawa Electric Corp Method for controlling robot, and method for correcting position and attitude of robot and held object
CN101402199A (en) * 2008-10-20 2009-04-08 北京理工大学 Hand-eye type robot movable target extracting method with low servo accuracy based on visual sensation
CN104842362A (en) * 2015-06-18 2015-08-19 厦门理工学院 Method for grabbing material bag by robot and robot grabbing device
CN107414832A (en) * 2017-08-08 2017-12-01 华南理工大学 A kind of mobile mechanical arm crawl control system and method based on machine vision
CN107571260A (en) * 2017-10-25 2018-01-12 南京阿凡达机器人科技有限公司 The method and apparatus that control machine people captures object

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王忆勤 (Wang Yiqin) et al., 《中医面诊与计算机辅助诊断》 (Facial Diagnosis in Traditional Chinese Medicine and Computer-Aided Diagnosis), Shanghai Scientific and Technical Publishers (上海科学技术出版社), 30 November 2010, pages 71-75 *
过志强 (Guo Zhiqiang) et al., "视觉引导的机器人关节空间动态轨迹规划" (Vision-guided dynamic trajectory planning in robot joint space), 自动化仪表 (Process Automation Instrumentation), vol. 36, no. 3, 31 March 2015 (2015-03-31), pages 77-80 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112894800A (en) * 2020-12-24 2021-06-04 苏州迈维视电子技术有限公司 Method for workpiece grabbing and blanking guide
CN113199454A (en) * 2021-06-22 2021-08-03 北京航空航天大学 Wheeled mobile intelligent logistics operation robot system
CN113246140A (en) * 2021-06-22 2021-08-13 沈阳风驰软件股份有限公司 Multi-model workpiece disordered grabbing method and device based on camera measurement
CN113246140B (en) * 2021-06-22 2021-10-15 沈阳风驰软件股份有限公司 Multi-model workpiece disordered grabbing method and device based on camera measurement
CN113894050A (en) * 2021-09-14 2022-01-07 深圳玩智商科技有限公司 Logistics piece sorting method, sorting equipment and storage medium
CN113954075A (en) * 2021-11-10 2022-01-21 佛山市南海区广工大数控装备协同创新研究院 Moving object tracking and grabbing method and device based on active movement of robot

Similar Documents

Publication Publication Date Title
CN112109074A (en) Robot target image capturing method
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
CN107160364B (en) Industrial robot teaching system and method based on machine vision
CN114080583B (en) Visual teaching and repetitive movement manipulation system
CN109571487B (en) Robot demonstration learning method based on vision
Do et al. Imitation of human motion on a humanoid robot using non-linear optimization
CN105252532A (en) Method of cooperative flexible attitude control for motion capture robot
CN104325268A (en) Industrial robot three-dimensional space independent assembly method based on intelligent learning
CN112454333B (en) Robot teaching system and method based on image segmentation and surface electromyogram signals
CN113829343B (en) Real-time multitasking and multi-man-machine interaction system based on environment perception
Schröder et al. Real-time hand tracking with a color glove for the actuation of anthropomorphic robot hands
Skoglund et al. Programming by demonstration of pick-and-place tasks for industrial manipulators using task primitives
CN114770461A (en) Monocular vision-based mobile robot and automatic grabbing method thereof
CN113119073A (en) Mechanical arm system based on computer vision and machine learning and oriented to 3C assembly scene
Gao et al. Kinect-based motion recognition tracking robotic arm platform
CN115723152B (en) Intelligent nursing robot
CN111702787A (en) Man-machine cooperation control system and control method
CN116423520A (en) Mechanical arm track planning method based on vision and dynamic motion primitives
Zhou et al. Visual servo control system of 2-DOF parallel robot
CN116206189A (en) Curved surface graphic identification code and identification method thereof
Jayasurya et al. Gesture controlled AI-robot using Kinect
Bai et al. Kinect-based hand tracking for first-person-perspective robotic arm teleoperation
Infantino et al. Visual control of a robotic hand
Li et al. Design of Tai-Chi Push-Hands Robot Control System and Construction of Visual Platform
Wang et al. Object Grabbing of Robotic Arm Based on OpenMV Module Positioning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination