CN111633635B - Robot feeding and discharging operation method based on visual positioning - Google Patents
- Publication number
- CN111633635B CN111633635B CN202010623397.5A CN202010623397A CN111633635B CN 111633635 B CN111633635 B CN 111633635B CN 202010623397 A CN202010623397 A CN 202010623397A CN 111633635 B CN111633635 B CN 111633635B
- Authority
- CN
- China
- Prior art keywords
- robot
- positioning
- coordinate system
- point
- angle
- Prior art date
- Legal status
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/0081—Programme-controlled manipulators with master teach-in means
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
- B25J9/1612—Programme controls characterised by the hand, wrist, grip control
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1661—Programme controls characterised by task planning, object-oriented languages
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/02—Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
Abstract
A robot loading and unloading operation method based on visual positioning: the robot grabs the material and then takes a picture; through image recognition and positioning, the position and angle of the material positioning point and the pose of the robot flange face at initial teaching are obtained and stored as references; during actual work, the position and angle of the material are finely adjusted against these references until the material aligns with the position and angle of the positioning tool on the workbench, so that the material is placed on the workbench accurately. The method reduces tooling and labor costs, solves the problem that the workpiece cannot be visually positioned in the incoming-material area, achieves high-precision positioning, and improves the production takt; it is particularly suitable for material grabbing tasks in the automobile and 3C industries.
Description
Technical Field
The invention relates to the field of robot vision guiding and positioning, in particular to a robot feeding and discharging operation method based on vision positioning.
Background
Robot vision-guided positioning plays an increasingly important role in material grasping and placement. In existing high-precision robot loading and unloading operations, the material must be placed at a positioning tool on a workbench, and its placing angle must match that tool. The positioning tool may constrain and locate the material from its periphery, or locate it from the inside through positioning holes, pins, grooves, and the like. In either case, the material has a positioning center corresponding to the positioning tool; once the two centers are aligned and the material's angle about that center matches the shape of the positioning tool, the material can be placed straight down onto the workbench.
Two methods are commonly used to achieve this. The first uses a pre-positioning tool, identical to the placing environment on the workbench, to fix the material's placing angle in advance; the robot then picks the material and loads or unloads it along a fixed trajectory. The second places the materials randomly on a material table; after the camera photographs and locates the material each time, the robot grasps it at the required angle. The first method requires designing a high-precision pre-positioning tool and manually placing the incoming material on it, which is costly, inconsistent, and imprecise. The second method suffers when the camera is mounted at the end of the robot arm, which slows the working takt; some working conditions do not allow the camera to be fixed on a frame, and the shot is easily occluded by other materials or devices on the material table.
Disclosure of Invention
The present disclosure provides a robot loading and unloading method based on visual positioning that achieves accurate positioning and placement of the material without an expensive pre-positioning tool, while keeping the impact on the working takt small.
The robot feeding and discharging method based on visual positioning comprises the following steps:
calibrating the robot hand-eye relationship, and establishing the relationship between the image pixel coordinate system and the robot base coordinate system;
teaching, comprising the steps of:
the robot clamping jaw grabs the material and moves to a fixed photographing point;
through photographing and image recognition positioning, obtaining the position (x1, y1) and angle theta1 of the material positioning point in the robot base coordinate system at this moment, together with the current pose (x0, y0, z0, rx0, ry0, rz0) of the robot end;
in the robot's default TCP tool coordinate system, teaching the translation of the material positioning point to above the workbench positioning tool, i.e., moving (Xa, Ya, Za, 0, 0, 0);
setting a dynamic tool coordinate system TCP1 (x1-x0, y1-y0, 0, 0, 0, 0) at the material positioning point, and teaching the robot to rotate about TCP1 by an angle Rza so that the material angle matches the angle of the workbench positioning tool;
placing the workpiece;
the loading and unloading work comprises the following steps:
the robot clamping jaw grabs the material and moves to the photographing point;
obtaining the position (x2, y2) and angle theta2 of the material positioning point in the robot base coordinate system at this moment through photographing and image recognition positioning;
in the robot's default TCP tool coordinate system, translating the material positioning point by (Xa + x2-x1, Ya + y2-y1, Za, 0, 0, 0);
setting a dynamic tool coordinate system TCP2 (x2-x0, y2-y0, 0, 0, 0, 0) at the material positioning point, and controlling the robot to rotate about TCP2 by Rza + theta2-theta1;
placing the workpiece.
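The work-stage steps above reduce to two corrections computed from the taught references. A minimal Python sketch of the patent's formulas; the function and variable names are illustrative, not from the patent, which specifies only the arithmetic:

```python
def correction_moves(teach_pt, teach_angle, work_pt, work_angle,
                     taught_translation, taught_rotation):
    """Return (translation, rotation) to execute at run time.

    teach_pt / work_pt: (x, y) of the material positioning point in the
    robot base frame at teaching / working time; angles in degrees.
    taught_translation: (Xa, Ya, Za) recorded during teaching.
    taught_rotation: Rza recorded during teaching.
    """
    x1, y1 = teach_pt
    x2, y2 = work_pt
    Xa, Ya, Za = taught_translation
    # Combined translation: (Xa + x2-x1, Ya + y2-y1, Za), as in the text.
    translation = (Xa + x2 - x1, Ya + y2 - y1, Za)
    # Combined rotation about the dynamic TCP: Rza + theta2 - theta1.
    rotation = taught_rotation + work_angle - teach_angle
    return translation, rotation

t, r = correction_moves((10.0, 20.0), 30.0, (12.5, 19.0), 33.0,
                        (100.0, 50.0, -80.0), 15.0)
# t == (102.5, 49.0, -80.0), r == 18.0
```

Sign conventions for the fine-adjustment terms depend on how the robot's tool frame is oriented relative to the base frame; the sketch follows the formulas exactly as written in the disclosure.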
Further, the robot hand-eye relationship is calibrated as follows: the camera photographs a 4-point calibration plate at the photographing point, a calibration probe is mounted on the flange at the robot end, and the relationship between the image pixel coordinate system and the robot base coordinate system is obtained by the 4-point calibration method.
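One common way to realize a 4-point calibration is a least-squares affine fit between the four pixel coordinates of the plate points and the four robot base coordinates touched by the probe. A sketch assuming NumPy is available; the function name and the synthetic numbers are illustrative:

```python
import numpy as np

def fit_pixel_to_base(pixel_pts, base_pts):
    """Least-squares affine map: [x, y] = [u, v, 1] @ M, with M of shape (3, 2)."""
    A = np.hstack([np.asarray(pixel_pts, float),
                   np.ones((len(pixel_pts), 1))])
    M, *_ = np.linalg.lstsq(A, np.asarray(base_pts, float), rcond=None)
    return M

# Synthetic check: scale 0.5 mm/px plus translation (100, 200) mm.
px = [(0, 0), (100, 0), (0, 100), (100, 100)]
base = [(100 + 0.5 * u, 200 + 0.5 * v) for u, v in px]
M = fit_pixel_to_base(px, base)
x, y = np.array([50.0, 50.0, 1.0]) @ M   # maps pixel (50, 50) into the base frame
```

With four non-collinear points the affine fit is overdetermined enough to absorb small measurement noise as well.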
Further, the position (x1, y1) and material angle theta1 of the material positioning point in the robot base coordinate system are obtained by photographing and image recognition positioning as follows:
the camera takes a picture at the photographing point;
the pixel coordinates (u1, v1) and material angle theta1 of the material positioning point in the image are obtained through image recognition positioning;
according to the robot hand-eye calibration result, these are converted into the position (x1, y1) and angle theta1 in the robot base coordinate system.
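Applying the calibration result to convert a detected pixel position and image angle into the base frame could look like the following sketch. The 90-degree calibration rotation, 0.2 mm/px scale, and translation are made-up example values, not values from the patent:

```python
import math

# Assumed calibration parameters (illustrative only).
SCALE = 0.2            # mm per pixel
ROT_DEG = 90.0         # rotation of the image frame relative to the base frame
TX, TY = 300.0, 150.0  # translation, mm

def pixel_to_base(u, v):
    """Map pixel coordinates (u, v) to base-frame coordinates (x, y)."""
    a = math.radians(ROT_DEG)
    x = TX + SCALE * (u * math.cos(a) - v * math.sin(a))
    y = TY + SCALE * (u * math.sin(a) + v * math.cos(a))
    return x, y

def angle_to_base(theta_img_deg):
    # An angle measured in the image shifts by the calibration rotation.
    return (theta_img_deg + ROT_DEG) % 360.0

x1, y1 = pixel_to_base(100.0, 0.0)   # approx (300.0, 170.0)
theta1 = angle_to_base(30.0)         # 120.0
```

Only the converted (x1, y1) and theta1 are stored as the teaching references; the raw pixel values are not needed afterwards.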
In this method, the material is grabbed and then photographed; the position and angle of the material positioning point and the pose of the robot flange face at initial teaching are obtained by image recognition and positioning and used as references; during actual work, the material's position and angle are finely adjusted against these references until they align with the position and angle of the positioning tool on the workbench, so that the material is placed on the workbench accurately.
Compared with the prior art, the beneficial effects of this disclosure are: first, no high-precision pre-positioning tool needs to be designed, reducing tooling and labor costs; second, the problem that the workpiece cannot be visually positioned in the incoming-material area is solved; third, positioning is accurate, the running takt of the equipment is improved, and productivity is increased.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the disclosure.
FIG. 1 shows a schematic diagram of a robot loading and unloading operation according to an exemplary embodiment;
FIG. 2 shows an example of the position and angle of the center of a material hole during robot operation;
FIG. 3 shows a robot loading and unloading workflow diagram in accordance with an exemplary embodiment.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an exemplary embodiment of the present invention loads the core piece of a car seat recliner onto a welding table in a fixed pose.
The specific requirement is to fit the positioning hole of the core piece onto the positioning pin of the welding table, i.e., the blanking pin. The positioning hole of the core piece and the cross-section of the blanking pin are not necessarily circular, so the core piece must be fitted on in a fixed pose. In addition, the positioning hole in the core piece may be offset and is not necessarily at the geometric center of the circular core piece. The incoming material basket holds stacks of core pieces, each stack fitted over a cylindrical pin, and the angles of the pieces in each stack are random.
To take the core piece out of the incoming-material frame and fit it accurately onto the positioning pin of the welding table, several methods are available: 1) fix a camera on a support above the feeding frame to photograph and locate the core piece, but the height of the cylindrical pin degrades the positioning and recognition accuracy; 2) mount a camera at the end of the robot arm and photograph there, which is affected by the pin height in the same way.
In view of this, the embodiment adopts the robot loading and unloading operation method based on visual positioning of the present invention: the core piece is removed from the pin, photographed and identified, and then placed, as shown in figure 3.
In this embodiment, the robot is configured as follows: a clamping jaw for grabbing the core piece is mounted at the end; the camera is above the clamping jaw; the clamping jaw tool and the flange face are not necessarily coaxial (an offset tool), and the TCP of the clamping jaw tool cannot be calibrated accurately. Under these conditions, the core piece is fitted onto the positioning pin of the welding table in a fixed pose; the specific steps are:
1. Calibrate the robot hand-eye relationship and establish the relationship between the image pixel coordinate system and the robot base coordinate system.
2. The teaching comprises the following specific steps:
(1) the robot gripping jaw (4 in fig. 1) grips the material (3 in fig. 1) and moves to a fixed photographing point;
"Moving to a fixed photographing point" means the robot end reaches the same pose, i.e., the center of the flange face reaches a fixed position. However, in the basket the materials sit on the cylindrical pins at random angles, so after a material is taken out, although the clamping jaw and the flange-face center are fixed, the position and angle of the material hole center are not necessarily the same. Taking figure 2 as an example: if the material is only translated and rotated according to the teaching result, it cannot be placed on the positioning pin.
(2) Acquiring the position and the angle of the center of the material hole under a basic coordinate system of the robot at the moment through photographing and image recognition positioning;
further, the specific method of the step comprises:
the camera (5 in figure 1) takes a picture at the photographing point;
in this embodiment, the geometric center of the material hole is taken as the material positioning point; visual software identifies and locates the pixel coordinates (u1, v1) of the hole center in the image and the material angle theta1 (the deflection of the material, about the hole center, from a fixed reference axis);
these are converted into the position (x1, y1) and angle theta1 in the robot base coordinate system according to the hand-eye calibration result;
meanwhile, the robot end pose at the photographing point at this moment is recorded as (x0, y0, z0, rx0, ry0, rz0);
(3) take the center point of the robot flange face as the tool coordinate system TCP0, which is also the robot's default TCP; in TCP0, teach the translation of the material hole center to above the pin, i.e., move (Xa, Ya, Za, 0, 0, 0);
(4) set a dynamic tool coordinate system TCP1 (x1-x0, y1-y0, 0, 0, 0, 0) at the material hole center; teach the robot to rotate about TCP1 by a certain angle, i.e., move (0, 0, 0, 0, 0, Rza), so that the angle of the material hole matches the angle of the positioning pin on the workbench;
(5) place the workpiece.
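Rotating about the dynamic TCP at the material hole center means the flange center itself sweeps an arc around that point. The planar effect on any point rigidly attached to the gripper, such as the flange center, can be sketched as follows (the function name and coordinates are illustrative):

```python
import math

def rotate_flange_about_point(flange_xy, pivot_xy, angle_deg):
    """New flange (x, y) after rotating the whole tool by angle_deg about
    a pivot point (here the material hole center); z is unchanged."""
    fx, fy = flange_xy
    px, py = pivot_xy
    a = math.radians(angle_deg)
    dx, dy = fx - px, fy - py
    return (px + dx * math.cos(a) - dy * math.sin(a),
            py + dx * math.sin(a) + dy * math.cos(a))

# Example: flange at (650, 120) mm, hole center at (655, 140) mm, rotate 15 deg.
new_flange = rotate_flange_about_point((650.0, 120.0), (655.0, 140.0), 15.0)
```

Because the material hole center is the pivot, it stays fixed during the rotation; only its angle (and the flange pose) changes, which is exactly why the offset, uncalibrated jaw TCP does not matter here.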
3. The working stage comprises the following steps:
(1) the robot clamping jaw (4 in figure 1) grabs the material (3 in figure 1) and moves to the photographing point fixed during teaching;
(2) obtain the position (x2, y2) and angle theta2 of the material hole center in the robot base coordinate system through photographing and image recognition positioning;
(3) taking the center point of the robot flange face as the TCP0 tool coordinate system, translate the material hole center by (Xa, Ya, Za, 0, 0, 0);
then, still in TCP0, translate the material hole center by (x2-x1, y2-y1, 0, 0, 0, 0) for fine adjustment and correction;
in practice, these two steps can be combined into one move (Xa + x2-x1, Ya + y2-y1, Za, 0, 0, 0).
(4) set a dynamic tool coordinate system TCP2 (x2-x0, y2-y0, 0, 0, 0, 0) at the material hole center; rotate the robot about TCP2 by a certain angle, i.e., move (0, 0, 0, 0, 0, Rza);
the robot then continues to rotate about TCP2, i.e., moves (0, 0, 0, 0, 0, theta2-theta1), so that the angle of the material hole exactly matches the angle of the positioning pin on the workbench.
In practice, these two steps may also be combined into one move (0, 0, 0, 0, 0, Rza + theta2-theta1).
(5) place the workpiece.
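The two rotations in step (4) can be merged because rotations about the same pivot compose additively. A quick numerical check (the helper function and the numbers are illustrative):

```python
import math

def rotate_about(p, c, deg):
    """Rotate point p about pivot c by deg degrees in the plane."""
    a = math.radians(deg)
    dx, dy = p[0] - c[0], p[1] - c[1]
    return (c[0] + dx * math.cos(a) - dy * math.sin(a),
            c[1] + dx * math.sin(a) + dy * math.cos(a))

p, c = (35.0, 12.0), (20.0, 5.0)
two_steps = rotate_about(rotate_about(p, c, 15.0), c, 3.0)  # Rza, then theta2-theta1
one_step = rotate_about(p, c, 18.0)                         # Rza + theta2-theta1
# two_steps and one_step agree to floating-point precision
```

The same additivity holds for the two translations in step (3), which is why both pairs collapse into single move commands.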
The preferred calibration method: the camera photographs a 4-point calibration plate at the photographing point, a calibration probe is mounted on the flange at the robot end, and the relationship between the image pixel coordinate system and the robot base coordinate system is obtained by the 4-point calibration method.
In addition, when the method is applied to loading and unloading, the material positioning point is chosen flexibly according to the material and the structure of the positioning tool on the workbench: when a positioning hole or pin inside the material mates with the positioning tool, the center of that hole or pin is taken as the positioning point; if the positioning tool constrains the material from the periphery, the geometric center of the material can be chosen as the positioning point.
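When the geometric center of the material serves as the positioning point, it can be computed from the detected outline, for example as the area centroid of the contour polygon. A generic sketch of that computation, not the patent's vision software:

```python
def polygon_centroid(pts):
    """Area centroid of a simple (non-self-intersecting) polygon,
    given as a list of (x, y) vertices, via the shoelace formula."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6.0 * a), cy / (6.0 * a))

center = polygon_centroid([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])
# center == (0.5, 0.5) for the unit square
```

The area centroid is preferable to a simple vertex average for irregular outlines, since it is insensitive to uneven vertex spacing along the contour.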
In this robot loading and unloading method, the material is grabbed and then photographed; the position and angle of the material hole center and the pose of the robot flange face at initial teaching are obtained by image recognition and positioning and used as references; during actual work, the material's position and angle are finely adjusted against these references until they align with the position and angle of the positioning pin on the workbench, so that the material is placed on the workbench accurately.
Compared with the prior art, the beneficial effect of this disclosure is:
firstly, a high-precision pre-positioning tool is not required to be designed, so that the tool cost and the labor cost are reduced; the problem that the workpiece cannot be visually positioned in the incoming material area is solved; and thirdly, the positioning can be accurately performed, the running beat of the equipment is improved, and the productivity is improved.
The foregoing is illustrative of the present invention; various modifications and changes in form or detail will readily occur to those skilled in the art based on the teachings and principles disclosed herein, and the embodiments are to be regarded as illustrative rather than restrictive of the broad principles of the invention.
Claims (3)
1. A robot loading and unloading operation method based on visual positioning, comprising the following steps:
calibrating the robot hand-eye relationship, and establishing the relationship between the image pixel coordinate system and the robot base coordinate system;
teaching, comprising the steps of:
the robot clamping jaw grabs the material and moves to a fixed photographing point;
through photographing and image recognition positioning, obtaining the position (x1, y1) and angle theta1 of the material positioning point in the robot base coordinate system at this moment, together with the current pose (x0, y0, z0, rx0, ry0, rz0) of the robot end;
in the robot's default TCP tool coordinate system, teaching the translation of the material positioning point to above the workbench positioning tool, i.e., moving (Xa, Ya, Za, 0, 0, 0);
setting a dynamic tool coordinate system TCP1 (x1-x0, y1-y0, 0, 0, 0, 0) at the material positioning point, and teaching the robot to rotate about TCP1 by an angle Rza so that the material angle matches the angle of the workbench positioning tool;
placing the workpiece;
the loading and unloading work comprises the following steps:
the robot clamping jaw grabs the material and moves to the photographing point;
obtaining the position (x2, y2) and angle theta2 of the material positioning point in the robot base coordinate system at this moment through photographing and image recognition positioning;
in the robot's default TCP tool coordinate system, translating the material positioning point by (Xa + x2-x1, Ya + y2-y1, Za, 0, 0, 0);
setting a dynamic tool coordinate system TCP2 (x2-x0, y2-y0, 0, 0, 0, 0) at the material positioning point, and controlling the robot to rotate about TCP2 by Rza + theta2-theta1;
placing the workpiece.
2. The robot loading and unloading operation method as claimed in claim 1, wherein the robot hand-eye relationship is calibrated as follows: the camera photographs a 4-point calibration plate at the photographing point, a calibration probe is mounted on the flange at the robot end, and the relationship between the image pixel coordinate system and the robot base coordinate system is obtained by the 4-point calibration method.
3. The robot loading and unloading operation method as claimed in claim 1, wherein the position (x1, y1) and material angle theta1 of the material positioning point in the robot base coordinate system are obtained by photographing and image recognition positioning through the following steps:
the camera takes a picture at the photographing point;
the pixel coordinates (u1, v1) and material angle theta1 of the material positioning point in the image are obtained through image recognition positioning;
according to the robot hand-eye calibration result, these are converted into the position (x1, y1) and angle theta1 in the robot base coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010623397.5A CN111633635B (en) | 2020-07-01 | 2020-07-01 | Robot feeding and discharging operation method based on visual positioning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010623397.5A CN111633635B (en) | 2020-07-01 | 2020-07-01 | Robot feeding and discharging operation method based on visual positioning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111633635A CN111633635A (en) | 2020-09-08 |
CN111633635B true CN111633635B (en) | 2021-12-07 |
Family
ID=72326093
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010623397.5A Active CN111633635B (en) | 2020-07-01 | 2020-07-01 | Robot feeding and discharging operation method based on visual positioning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111633635B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112692829A (en) * | 2020-12-22 | 2021-04-23 | 程小龙 | Automatic unloading robot of wisdom mill based on 5G network |
CN113610919B (en) * | 2021-07-30 | 2023-10-20 | 深圳明锐理想科技有限公司 | Posture correction method and posture correction system of dust-sticking device |
CN114749981B (en) * | 2022-05-27 | 2023-03-24 | 中迪机器人(盐城)有限公司 | Feeding and discharging control system and method based on multi-axis robot |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108927805A (en) * | 2018-07-25 | 2018-12-04 | 哈尔滨工业大学 | A kind of robot automation's plug pin method of view-based access control model compensation |
CN110539299A (en) * | 2018-05-29 | 2019-12-06 | 北京京东尚科信息技术有限公司 | Robot working method, controller and robot system |
CN110561415A (en) * | 2019-07-30 | 2019-12-13 | 苏州紫金港智能制造装备有限公司 | Double-robot cooperative assembly system and method based on machine vision compensation |
CN111300481A (en) * | 2019-12-11 | 2020-06-19 | 苏州大学 | Robot grabbing pose correction method based on vision and laser sensor |
CN111300422A (en) * | 2020-03-17 | 2020-06-19 | 浙江大学 | Robot workpiece grabbing pose error compensation method based on visual image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018167334A (en) * | 2017-03-29 | 2018-11-01 | セイコーエプソン株式会社 | Teaching device and teaching method |
- 2020-07-01: application CN202010623397.5A filed in China; granted as CN111633635B, status Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110539299A (en) * | 2018-05-29 | 2019-12-06 | 北京京东尚科信息技术有限公司 | Robot working method, controller and robot system |
CN108927805A (en) * | 2018-07-25 | 2018-12-04 | 哈尔滨工业大学 | A kind of robot automation's plug pin method of view-based access control model compensation |
CN110561415A (en) * | 2019-07-30 | 2019-12-13 | 苏州紫金港智能制造装备有限公司 | Double-robot cooperative assembly system and method based on machine vision compensation |
CN111300481A (en) * | 2019-12-11 | 2020-06-19 | 苏州大学 | Robot grabbing pose correction method based on vision and laser sensor |
CN111300422A (en) * | 2020-03-17 | 2020-06-19 | 浙江大学 | Robot workpiece grabbing pose error compensation method based on visual image |
Also Published As
Publication number | Publication date |
---|---|
CN111633635A (en) | 2020-09-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111633635B (en) | Robot feeding and discharging operation method based on visual positioning | |
CN113601158B (en) | Bolt feeding pre-tightening system based on visual positioning and control method | |
DE102016009438A1 (en) | Robot system with vision sensor and a large number of robots | |
CN111571190A (en) | Three-dimensional visual automatic assembly system and method | |
CN108927805B (en) | Robot automatic nail inserting method based on visual compensation | |
CN112720458B (en) | System and method for online real-time correction of robot tool coordinate system | |
CN114248086B (en) | Flexible three-dimensional vision-guided robot alignment system and method | |
CN110148187A (en) | A kind of the high-precision hand and eye calibrating method and system of SCARA manipulator Eye-in-Hand | |
CN111145272A (en) | Manipulator and camera hand-eye calibration device and method | |
CN114260903A (en) | 3D visual precise plug-in mounting guide control method for industrial robot with disc type multi-station gripper | |
TW202102347A (en) | Calibration method of vision-guided robot arm only needing to specify a positioning mark in the calibration target to perform calibration | |
CN112238453B (en) | Vision-guided robot arm correction method | |
CN111633649A (en) | Mechanical arm adjusting method and adjusting system thereof | |
CN114074331A (en) | Disordered grabbing method based on vision and robot | |
CN111267094A (en) | Workpiece positioning and grabbing method based on binocular vision | |
CN112598752A (en) | Calibration method based on visual identification and operation method | |
CN112792818B (en) | Visual alignment method for rapidly guiding manipulator to grasp target | |
JPH06187021A (en) | Coordinate correcting method for robot with visual sense | |
CN106530357B (en) | Visual alignment control device and calibration method | |
CN114663400A (en) | Nailing control method and system based on visual positioning seat cushion | |
CN212749627U (en) | Vision-guided material taking and placing device and system | |
CN207509222U (en) | One kind is used for the fertile positioning device of microbit intelligence | |
CN211699034U (en) | Hand-eye calibration device for manipulator and camera | |
CN114571199A (en) | Screw locking machine and screw positioning method | |
CN210731420U (en) | Automatic welding device based on machine vision and AI algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||