CN116038698A - Robot guiding method and device, electronic equipment and storage medium - Google Patents

Robot guiding method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN116038698A
CN116038698A (application CN202211714189.1A)
Authority
CN
China
Prior art keywords
offset
information
operated
distance
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211714189.1A
Other languages
Chinese (zh)
Inventor
白帆 (Bai Fan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Shenqishen Network Technology Co., Ltd.
Original Assignee
Shanghai Shenqishen Network Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Shenqishen Network Technology Co ltd filed Critical Shanghai Shenqishen Network Technology Co ltd
Priority to CN202211714189.1A priority Critical patent/CN116038698A/en
Publication of CN116038698A publication Critical patent/CN116038698A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1692Calibration of manipulator
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

Embodiments of the invention provide a robot guiding method and device, an electronic device, and a storage medium, relating to the technical field of robot control. The robot guiding method comprises the following steps: controlling a mobile platform to move to a preset reference position; after the mobile platform reaches the reference position, acquiring image data of a target to be operated with an image acquisition device; performing data analysis on the image data to obtain specified description information; performing error correction on the specified description information; and guiding the position of a multi-axis robot according to the corrected specified description information, so that the end effector of the guided multi-axis robot reaches the operation area of the target to be operated. The repositioning accuracy of the multi-axis robot is thereby improved, and with it the repositioning accuracy of the mobile multi-axis robot during operation.

Description

Robot guiding method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of robot control technologies, and in particular, to a method and apparatus for guiding a robot, an electronic device, and a storage medium.
Background
Multi-axis robots are widely used in industrial production, for example on production lines where products are processed by a multi-axis robot. To meet the demands of practical applications, a multi-axis robot is usually mounted on a mobile platform to form a mobile multi-axis robot, enabling mobile operation: for instance, the robot is carried to a position to be operated in order to pick up a target. The mobile platform may be an AGV (Automated Guided Vehicle) or an AMR (Autonomous Mobile Robot), and may navigate by laser radar or by other methods such as structured light or multi-camera vision.
In practice, however, navigation errors, errors in the mechanical linkage between the multi-axis robot and the mobile platform, the limited parking accuracy of the platform's chassis motor, and delays in the linkage between the navigation module and the motor all reduce the repositioning accuracy of the mobile platform during operation. As a result, the repositioning accuracy of the mobile multi-axis robot is low, the robot may fail to reach the target position accurately, and it may therefore be unable to operate.
Disclosure of Invention
The embodiment of the invention aims to provide a robot guiding method, a device, electronic equipment and a storage medium, so as to improve repositioning accuracy when a mobile multi-axis robot works. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a robot guiding method, which is applied to a control device, where the control device is in communication connection with a mobile multi-axis robot; the mobile multi-axis robot includes: the system comprises a multi-axis robot and a mobile platform, wherein an image acquisition device is loaded on a mechanical arm of the multi-axis robot, which is provided with an end effector; the method comprises the following steps:
controlling the mobile platform to move to a preset reference position; the reference position is a position of a target to be operated, which is obtained after the target to be operated is subjected to preliminary positioning;
after the mobile platform reaches the reference position, acquiring image data of the target to be operated by using the image acquisition equipment;
carrying out data analysis on the image data to obtain specified description information; the specified descriptive information is descriptive information for representing the relative positional relationship between the target to be operated and the multi-axis robot;
Performing error correction on the specified description information;
and performing position guidance on the multi-axis robot according to the corrected specified description information so that the end effector of the guided multi-axis robot reaches the operation area aiming at the target to be operated.
Optionally, the specifying description information includes: angular offset information, depth information, and line offset information; the angle offset information is the angle offset information from the image plane of the image acquisition device to the target to be operated in a base station coordinate system, the depth information is the vertical distance from the end effector to the target to be operated in the base station coordinate system, the line offset information is the height distance and the transverse distance from the end effector to the target to be operated in the base station coordinate system, and the base station coordinate system is a coordinate system centered on the multi-axis robot;
performing error correction on the specified description information, including:
performing error correction on each of the angular offset information, the depth information, and the line offset information, in a predetermined correction order, using the error correction method corresponding to that type of information;
wherein the predetermined correction order is an order over the angular offset information, the depth information, and the line offset information.
Optionally, the predetermined correction sequence includes:
and sequentially performing error correction on the angle offset information, the depth information and the line offset information.
Optionally, the error correction method corresponding to the angular offset information includes:
acquiring the offset direction of the image plane relative to the target to be operated, wherein the offset direction is characterized by the angular offset information;
acquiring an offset angle represented by the angle offset information as a current offset angle;
shifting the current offset angle by a first step length in the direction opposite to the offset direction; the first step length is an angle value related to the offset angle, and its initial value is a preset offset angle;
judging whether the offset direction changes after the first step length has been applied;
if the offset direction has not changed, taking the offset angle obtained after applying the first step length as the new current offset angle, and returning to the step of shifting the current offset angle by the first step length in the direction opposite to the offset direction, until the offset direction changes;
if the offset direction has changed, taking the offset angle obtained after applying the first step length as the new current offset angle, reducing the first step length in equal proportion to obtain a new first step length, and returning to the step of shifting the current offset angle by the first step length in the direction opposite to the offset direction, until the first step length becomes smaller than the angle execution precision; wherein the angle execution precision characterizes the accuracy of the angular offset information within a reasonable error range;
and determining the angle offset information of the offset angle obtained by the last offset to obtain corrected angle offset information.
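The shrinking-step search described above can be sketched as follows. The `measure_direction` callback, which would re-image the target and report the current offset direction as +1 or -1, is a hypothetical stand-in for the vision step and is not named in the patent:

```python
def correct_offset_angle(measure_direction, current_angle, step, precision, ratio=0.5):
    """Iteratively drive the offset angle toward zero.

    measure_direction(angle) -> +1 or -1: hypothetical sensor callback giving
    the offset direction of the image plane relative to the target.
    step: the first step length (its initial value is a preset offset angle).
    precision: the angle execution precision; the search stops once the step
    has shrunk below it.
    """
    direction = measure_direction(current_angle)
    while step >= precision:
        # Shift the current offset angle by the step, opposite to the offset direction.
        current_angle -= direction * step
        new_direction = measure_direction(current_angle)
        if new_direction != direction:
            # The direction flipped: shrink the step in equal proportion.
            step *= ratio
            direction = new_direction
    return current_angle
```

With an ideal direction sensor the search behaves like a bisection around the zero-offset angle and converges to within a few multiples of the execution precision.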
Optionally, the error correction method corresponding to the depth information includes:
determining, from a pre-acquired mapping table of reference image areas and corresponding reference depth information for the target to be operated, the two reference image areas adjacent to a specified image area and their corresponding reference depth information, to obtain a first set of data; the specified image area is the image area of the target to be operated in the image data;
determining, from the mapping table, two further reference image areas other than those in the first set of data and their corresponding reference depth information, to obtain a second set of data;
solving the constant parameters of a first linear equation by substituting the first set of data and the second set of data into the first linear equation; the first linear equation is a quadratic equation that takes the image area of an object as the independent variable and the depth information of that object as the dependent variable, and the equations used when substituting the first and second sets of data into the first linear equation for parameter solving are as follows:
[The three equations of the first linear equation, and two further symbols referenced below, are reproduced in the source publication only as formula images (BDA0004020281880000031 to BDA0004020281880000035) and are not recoverable here.]
wherein s1 and s2 are the two reference image areas in the first set of data; s0 and s3 are the two reference image areas in the second set of data; p1 is the reference depth information corresponding to the reference image area s1 in the mapping table, p2 that corresponding to s2, p3 that corresponding to s3, and p4 that corresponding to s4; t is the distance from the normal plane of the end effector to the normal plane of the aperture; and a, b and c are the constant parameters of the first linear equation to be solved;
substituting the specified image area into the first linear equation, with its constant parameters solved, to obtain a function value;
and correcting the depth information using the function value, to obtain corrected depth information.
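As a sketch of the parameter-solving step: the patent's equations survive only as images, but taking the stated form at face value, depth as a quadratic function of image area, the constants a, b and c can be recovered from the four reference pairs by a least-squares fit. The quadratic form and the use of `numpy.polyfit` are assumptions here:

```python
import numpy as np

def fit_depth_model(ref_areas, ref_depths):
    # Fit p = a*s**2 + b*s + c over the (area, depth) pairs taken from the
    # pre-acquired mapping table (first and second sets of data combined).
    return np.polyfit(ref_areas, ref_depths, 2)  # returns (a, b, c)

def correct_depth(coeffs, specified_area):
    # Substitute the specified image area into the solved equation.
    a, b, c = coeffs
    return a * specified_area**2 + b * specified_area + c
```

With four reference pairs generated from an exact quadratic, the fit recovers the coefficients and `correct_depth` reproduces the model's value at any queried area.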
Optionally, the error correction method corresponding to the line offset information includes:
estimating the height error and the transverse error of the end effector to the target to be operated;
performing preliminary correction on the line offset information according to the height error and the transverse error, to obtain preliminarily corrected line offset information, which comprises a preliminarily corrected height distance and a preliminarily corrected transverse distance;
acquiring the offset directions of the height distance and the transverse distance characterized by the preliminarily corrected line offset information;
acquiring the offset distances of the height distance and the transverse distance characterized by the preliminarily corrected line offset information as the current offset distances;
shifting the current offset distance by a second step length in the direction opposite to the offset direction; the second step length is a length value related to the offset distance, and its initial value is a preset offset distance;
judging whether the offset direction changes after the second step length has been applied;
if the offset direction has not changed, taking the offset distance obtained after applying the second step length as the new current offset distance, and returning to the step of shifting the current offset distance by the second step length in the direction opposite to the offset direction, until the offset direction changes;
if the offset direction has changed, taking the offset distance obtained after applying the second step length as the new current offset distance, reducing the second step length in equal proportion to obtain a new second step length, and returning to the step of shifting the current offset distance by the second step length in the direction opposite to the offset direction, until the second step length becomes smaller than the distance execution precision; the distance execution precision characterizes the accuracy of the line offset information within a reasonable error range;
and determining line offset information of the offset distance obtained by the last offset to obtain corrected line offset information.
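The height and transverse distances can be refined with the same shrinking-step search used for the angular offset, applied to each axis after the preliminary correction. The direction callbacks and the sign convention for subtracting the estimated errors are hypothetical:

```python
def refine(measure_direction, current, step, precision, ratio=0.5):
    # Same shrinking-step search as for the angular offset: step opposite the
    # measured offset direction, and halve the step whenever it flips.
    direction = measure_direction(current)
    while step >= precision:
        current -= direction * step
        new_direction = measure_direction(current)
        if new_direction != direction:
            step *= ratio
            direction = new_direction
    return current

def correct_line_offset(height, transverse, height_err, transverse_err,
                        dir_h, dir_t, step=0.05, precision=0.001):
    # Preliminary correction: subtract the estimated errors (sign convention assumed).
    height -= height_err
    transverse -= transverse_err
    # Refine each axis independently until the step falls below the
    # distance execution precision.
    return (refine(dir_h, height, step, precision),
            refine(dir_t, transverse, step, precision))
```

Treating the two axes independently keeps each refinement one-dimensional, matching the per-quantity correction order the method prescribes.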
Optionally, the estimating the height error and the lateral error of the end effector to the target to be operated includes:
after the multi-axis robot has been position-guided according to the corrected depth information, re-acquiring image data of the target to be operated with the image acquisition device, to obtain auxiliary image data;
performing data analysis on the auxiliary image data to obtain depth information;
calculating the scaling ratio between the newly obtained depth information and the corrected depth information, to obtain size information;
and estimating the height error and the transverse error of the end effector to the target to be operated by using the size information and the line offset information.
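The patent does not spell out the estimation formula. One plausible reading, labeled here as an assumption, is that the scaling ratio between the re-measured and corrected depths acts as a size factor on the in-plane line offsets:

```python
def estimate_line_errors(new_depth, corrected_depth, height_dist, transverse_dist):
    # Size information: scaling ratio between the re-measured depth (from the
    # auxiliary image data) and the corrected depth. Applying (scale - 1) to
    # the line offsets is this sketch's assumption, not the patent's formula.
    scale = new_depth / corrected_depth
    return height_dist * (scale - 1.0), transverse_dist * (scale - 1.0)
```

When the two depths agree the estimated errors are zero, and they grow in proportion to the depth mismatch, which is the qualitative behavior the method relies on.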
In a second aspect, an embodiment of the present invention provides a robot guiding device applied to a control apparatus, the control apparatus being communicatively connected to a mobile multi-axis robot; the mobile multi-axis robot includes a multi-axis robot and a mobile platform, wherein an image acquisition device is mounted on the mechanical arm of the multi-axis robot that carries the end effector; the device comprises:
the control module is used for controlling the mobile platform to move to a preset reference position; the reference position is a position of a target to be operated, which is obtained after the target to be operated is subjected to preliminary positioning;
the acquisition module is used for acquiring the image data of the target to be operated by using the image acquisition equipment after the mobile platform reaches the reference position;
the analysis module is used for carrying out data analysis on the image data to obtain specified description information; the specified descriptive information is descriptive information for representing the relative positional relationship between the target to be operated and the multi-axis robot;
the error correction module is used for carrying out error correction on the specified description information;
and the guiding module is used for guiding the position of the multi-axis robot according to the corrected specified description information so as to enable the end effector of the multi-axis robot after being guided to reach the operating area aiming at the target to be operated.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and a processor configured, when executing the program stored in the memory, to implement the method steps of any of the robot guiding methods described above.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the method steps of any of the robot guiding methods described above.
The embodiment of the invention has the beneficial effects that:
the robot guiding method provided by the embodiment of the invention can control the mobile platform to move to the preset reference position; after the mobile platform reaches the reference position, acquiring image data of the target to be operated by using an image acquisition device; carrying out data analysis on the image data to obtain appointed description information; performing error correction on the specified description information; and performing position guidance on the multi-axis robot according to the corrected specified description information so that the end effector of the guided multi-axis robot reaches the operation area aiming at the target to be operated. Compared with the prior art, the method and the device have the advantages that the image acquisition equipment is additionally configured for the multi-axis robot, the description information representing the position relation between the object to be operated and the multi-axis robot, namely, the appointed description information is obtained by carrying out data analysis on the image data about the object to be operated, which is acquired by the image acquisition equipment, and the appointed description information is corrected, so that the multi-axis robot is guided in position by utilizing the corrected appointed description information, and therefore, the repositioning precision of the multi-axis robot can be improved under the condition that the repositioning precision of a mobile platform is not high, and the repositioning precision of the mobile multi-axis robot during working is improved.
Of course, it is not necessary for any one product or method embodying the invention to achieve all of the advantages described above at the same time.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention, and a person skilled in the art may derive other embodiments from them.
Fig. 1 is a schematic flow chart of a robot guiding method according to an embodiment of the present invention;
fig. 2 is a flow chart of another robot guiding method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a multi-axis robot according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a coordinate system according to an embodiment of the present invention;
FIG. 5 (a) is a schematic diagram of a pinhole imaging method according to an embodiment of the present invention;
FIG. 5 (b) is a schematic diagram of another embodiment of the present invention for pinhole imaging;
fig. 6 is a schematic structural diagram of a robot guiding device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art on the basis of the embodiments of the present invention fall within the scope of protection of the present invention.
The following first describes the terms of art according to the embodiments of the present invention:
Multi-axis robot: also called a multi-axis manipulator, industrial manipulator and the like; a multi-degree-of-freedom, multi-purpose manipulator that is automatically controlled and can be reprogrammed. Its working behavior mainly consists of synchronized rotation of multiple axes to complete a set action.
And (3) a mobile platform: the platform for carrying the multi-axis robot to move can be an automatic guided vehicle AGV or an autonomous mobile robot AMR, and the specific form in actual use can be a trolley.
Mobile multi-axis robot: and the combination of the multi-axis robot and the mobile platform can carry the multi-axis robot, so that the multi-axis robot can work in a movable mode.
World coordinate system: the absolute coordinate system of the system. Before a user coordinate system is established, the coordinates of all points on the screen are determined with respect to the origin of this coordinate system.
Base station coordinate system: a coordinate system centered on a multi-axis robot may also be understood as a user coordinate system.
Coaxial camera: a camera is rigidly connected with an end effector of a multi-axis robot, an optical axis of the coaxial camera is parallel to a final-stage axis of a mechanical arm provided with the end effector, and the coaxial camera and the end effector can be regarded as a whole on a macroscopic scale. The optical axis of the coaxial camera is a line passing through the center of the camera; the final axis of the robot arm, where the end effector is located, is a line passing through the extreme center of the robot arm.
It should be noted that, this scheme has certain requirements to the operating environment and the equipment, and the following is a simple introduction:
environmental requirements: the ground of the operation point is flat.
Requirements for the mobile platform: the mobile platform must be able to reposition approximately accurately within a planar area, such that the planar line error is no more than ±10 cm and the angular error no more than ±20°. The line error comprises the planar height-distance error and the transverse-distance error; the angular error is the error in the offset angle of the mobile platform relative to the target to be operated.
Requirements for multi-axis robots: the repositioning error is not more than + -0.1 mm.
Requirements for the coaxial camera: the resolution should be no lower than 2560 x 1440 and the lens should be free of distortion; if the ambient light is uncontrollable, a supplementary light source is needed. Lens distortion can be understood as nonlinearity in the mapping between the spatial coordinates of an object and its image coordinates.
It should be noted that in this solution the control device may be configured to guide both the mobile platform and the multi-axis robot; the control device is communicatively connected to the mobile platform and the multi-axis robot or, equivalently, to the mobile multi-axis robot. For the mobile platform, it suffices that the control device can send coordinates to the platform and that the platform can move to the position those coordinates indicate. The repositioning accuracy of the multi-axis robot itself is higher, and the coordinate protocol differs between robot types, so the specific type of multi-axis robot is not limited here.
In addition, the mechanical arm of the multi-axis robot, which is provided with the end effector, is provided with the image acquisition device, and the end effector is contained in the field of view of the image acquisition device. In order to more clearly describe the loading location of the image acquisition apparatus, the following describes an exemplary structure of the multi-axis robot in conjunction with the exemplary structure presented in fig. 3:
As shown in fig. 3, the multi-axis robot may be composed of a plurality of mechanical arms, an image acquisition device 310, and an end effector 320. Unlike other multi-axis robots, in this embodiment the image acquisition device 310 is mounted on the mechanical arm carrying the end effector 320.
The image acquisition device 310 is used for acquiring images of the target to be operated; it should be noted that, the specific form of the image capturing device may be a coaxial camera, and the specific form of the image capturing device is not specifically limited in this scheme.
The end effector 320 is configured to operate on a target to be operated.
In addition, compared with the prior art, this embodiment only requires mounting an image acquisition device on the mechanical arm carrying the end effector, without extensive additional deployment on the multi-axis robot, to ultimately improve the repositioning accuracy of the mobile multi-axis robot; the solution is therefore low-cost and easy to deploy.
After the structure of the multiaxis robot is introduced, the coordinate system proposed in this embodiment is described below, as shown in fig. 4:
In this embodiment, the coordinate system is established as a left-handed system: a rectangular coordinate system in which, with the left thumb pointing in the positive x direction and the index finger in the positive y direction, the middle finger points in the positive z direction.
x, y and z describe position; rx, ry and rz describe pose, and the combination of position and pose is referred to as the coordinates. "Pose" is a conventional term, mostly used with Euler angles, and is not elaborated here.
After the coordinate system is introduced, the following is a simple description of a preprocessing method before error correction of the specified description information:
Before the robot is guided, the shape of the target to be operated must be detected; detection methods such as polygon detection, Hough line detection, and Hough circle detection can be used. For targets with complex shapes, a neural-network algorithm can be used; this solution places no particular limitation on the method.
In practice, detecting the shape of the target to be operated avoids the situation in which the multi-axis robot reaches the operation area of the target but cannot operate correctly. For example, if the multi-axis robot is to be guided into the operation area of switch A, the shape of switch A is detected in advance; if it is found to be a round switch, the end effector of the multi-axis robot can be aligned with the center of switch A during error correction.
Since this preprocessing is outside the scope of the present solution, it is not described further.
In order to improve repositioning accuracy when a mobile multi-axis robot works, the embodiment of the invention provides a robot guiding method, a device, electronic equipment and a storage medium.
The following first describes a robot guiding method provided in the embodiment of the present invention.
The robot guiding method provided by the embodiment of the invention can be applied to a control device. In a specific application, the control device may be a terminal device or a server, for example an edge computing platform such as a mobile phone, a personal computer (PC), or a server; the embodiment of the invention does not limit the specific form of the control device.
The control device is in communication connection with the mobile multi-axis robot; the mobile multi-axis robot includes: the multi-axis robot comprises a multi-axis robot and a mobile platform, wherein an image acquisition device is loaded on a mechanical arm of the multi-axis robot, wherein the mechanical arm is provided with an end effector.
The robot guiding method may include:
controlling the mobile platform to move to a preset reference position; the reference position is a position of a target to be operated, which is obtained after the target to be operated is subjected to preliminary positioning;
After the mobile platform reaches the reference position, acquiring image data of the target to be operated by using the image acquisition equipment;
carrying out data analysis on the image data to obtain specified description information; the specified descriptive information is descriptive information for representing the relative positional relationship between the target to be operated and the multi-axis robot;
performing error correction on the specified description information;
and performing position guidance on the multi-axis robot according to the corrected specified description information, so that the end effector of the guided multi-axis robot reaches the operation area for the target to be operated.
The robot guiding method provided by the embodiment of the invention can control the mobile platform to move to a preset reference position; after the mobile platform reaches the reference position, acquire image data of the target to be operated by using the image acquisition device; carry out data analysis on the image data to obtain the specified description information; perform error correction on the specified description information; and perform position guidance on the multi-axis robot according to the corrected specified description information, so that the end effector of the guided multi-axis robot reaches the operation area for the target to be operated. Compared with the prior art, this solution additionally equips the multi-axis robot with an image acquisition device, obtains the description information representing the positional relationship between the target to be operated and the multi-axis robot, namely the specified description information, by carrying out data analysis on the image data about the target to be operated collected by the image acquisition device, and corrects the specified description information, so that the multi-axis robot is position-guided using the corrected specified description information. Therefore, even when the repositioning accuracy of the mobile platform is not high, the repositioning accuracy of the multi-axis robot can be improved, thereby improving the repositioning accuracy of the mobile multi-axis robot during work.
The following describes a robot guiding method according to an embodiment of the present invention with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a robot guiding method applied to a control device, and the control device is communicatively connected with a mobile multi-axis robot. The mobile multi-axis robot includes a multi-axis robot and a mobile platform, and an image acquisition device is mounted on the mechanical arm of the multi-axis robot that carries the end effector. The method may include steps S101-S105:
S101, controlling the mobile platform to move to a preset reference position;
the reference position is a position of the target to be operated, which is obtained after the target to be operated is subjected to preliminary positioning;
It can be understood that the mobile platform can be roughly repositioned within the plane area, so that the reference position is reached; because of errors of the mobile platform such as laser radar error and mechanical linkage error, the repositioning accuracy of the mobile platform is not high. Since the reference position is obtained by preliminarily positioning the position of the target to be operated, the distance from the reference position to the target to be operated needs to be within a reasonable error. In addition, the target to be operated may be a switch, an object or the like, and may change according to different usage scenarios; the embodiment of the present invention does not specifically limit this. Illustratively, in an assembly-line scenario, the mobile platform is controlled to move to a preset reference position, the reference position being the position of the air switch a obtained after the air switch a is preliminarily positioned.
In practical use, the mobile platform is often an automated guided vehicle, and the automated guided vehicle is controlled to move to the preset reference position.
It should be noted that, when the control device controls the mobile platform to move to the reference position, the control device needs to send the coordinates of the reference position to the mobile platform. Different mobile platforms have different coordinate protocols, and the control device sends the coordinates to the mobile platform according to the corresponding coordinate protocol; this is not within the protection scope of the present solution, so it is not described further.
S102, after the mobile platform reaches the reference position, acquiring image data of the target to be operated by using the image acquisition equipment;
It should be noted that, after the mobile platform moves to the reference position, the image acquisition device can acquire image data of the target to be operated, so the acquired image data contains the target to be operated. Since the pixel resolution of the coaxial camera has been specified above, the clarity of the image data can be ensured. In addition, the image acquisition device is rigidly connected to the mechanical arm carrying the end effector and can macroscopically be regarded as a whole with it, so the positional relationship between the two is not limited.
For example, after the mobile platform a reaches the reference position, image data of the switch b is acquired using the coaxial camera.
S103, carrying out data analysis on the image data to obtain specified description information;
the specified descriptive information is descriptive information for representing a relative positional relationship between the target to be operated and the multi-axis robot.
Illustratively, the specified description information may include: the offset angle and the offset direction from the image plane of the image acquisition device to the target to be operated in the base coordinate system, the vertical distance from the end effector to the target to be operated in the base coordinate system, and/or the height distance and the lateral distance from the end effector to the target to be operated in the base coordinate system. Of course, the specified description information is not limited to all three of the above; one or two of them may also be selected.
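For illustration, the three kinds of specified description information can be held in a simple container. The following Python sketch is not part of the solution itself; the class and field names are assumptions of this sketch:

```python
from dataclasses import dataclass

@dataclass
class SpecifiedDescription:
    """Relative positional relationship between the target to be operated
    and the multi-axis robot, expressed in the base coordinate system."""
    offset_angle_deg: float   # angular offset of the image plane to the target
    offset_direction: str     # "left" or "right"
    depth_mm: float           # vertical distance, end effector to target
    height_mm: float          # line offset: height distance
    lateral_mm: float         # line offset: lateral distance

# Example: image plane deflected 30 degrees to the right, target 120 mm away.
info = SpecifiedDescription(offset_angle_deg=30.0, offset_direction="right",
                            depth_mm=120.0, height_mm=15.0, lateral_mm=-8.0)
```

Any subset of the fields can be populated, matching the note above that one, two, or all three kinds of information may be selected.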
It should be noted that there are various methods for carrying out data analysis on image data, most of which belong to the prior art; the process of carrying out data analysis on the image data is therefore not within the protection scope of the present invention, and the embodiment of the present invention does not specifically limit it.
S104, performing error correction on the specified description information;
It can be understood that, by performing error correction on the specified description information, corrected specified description information can be obtained; guiding the multi-axis robot according to the corrected description information can improve the repositioning accuracy of the multi-axis robot.
It should be noted that, for convenience of understanding, the process of performing error correction on the specified description information is described in other embodiments, so that redundant description is omitted herein.
S105, performing position guidance on the multi-axis robot according to the corrected specified description information, so that the end effector of the guided multi-axis robot reaches the operation area for the target to be operated;
Performing position guidance on the multi-axis robot according to the corrected specified description information means controlling the multi-axis robot to reposition according to the corrected specified description information.
After being guided, the end effector of the multi-axis robot may reach the operation region of the target to be operated, thereby finally operating the target to be operated.
Illustratively, in the assembly-line scenario, the multi-axis robot a is position-guided according to the corrected specified description information, so that the end effector of the guided multi-axis robot a reaches the operation area for the switch b.
The robot guiding method provided by the embodiment of the invention can control the mobile platform to move to a preset reference position; after the mobile platform reaches the reference position, acquire image data of the target to be operated by using the image acquisition device; carry out data analysis on the image data to obtain the specified description information; perform error correction on the specified description information; and perform position guidance on the multi-axis robot according to the corrected specified description information, so that the end effector of the guided multi-axis robot reaches the operation area for the target to be operated. Compared with the prior art, this solution additionally equips the multi-axis robot with an image acquisition device, obtains the description information representing the positional relationship between the target to be operated and the multi-axis robot, namely the specified description information, by carrying out data analysis on the image data about the target to be operated collected by the image acquisition device, and corrects the specified description information, so that the multi-axis robot is position-guided using the corrected specified description information. Therefore, even when the repositioning accuracy of the mobile platform is not high, the repositioning accuracy of the multi-axis robot can be improved, thereby improving the repositioning accuracy of the mobile multi-axis robot during work.
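The flow of steps S101-S105 can be sketched as a single control-flow routine. The Python below is only an illustrative sketch: every callable is a hypothetical stand-in for a real platform, camera, analysis, correction or arm interface, none of which are defined by this text:

```python
def guide_robot(move_platform_to, capture_image, analyze, correct, guide_arm,
                reference_position):
    """Control-flow sketch of steps S101-S105. Each argument is a
    hypothetical callable standing in for a hardware/software interface."""
    move_platform_to(reference_position)  # S101: coarse move to reference position
    image = capture_image()               # S102: image data of the target
    description = analyze(image)          # S103: specified description information
    corrected = correct(description)      # S104: error correction
    return guide_arm(corrected)           # S105: fine position guidance

# Mock wiring that only records the call order.
calls = []
result = guide_robot(
    move_platform_to=lambda pos: calls.append(("move", pos)),
    capture_image=lambda: calls.append("capture") or "image-data",
    analyze=lambda img: calls.append("analyze") or {"angle_deg": 3.0},
    correct=lambda desc: calls.append("correct") or desc,
    guide_arm=lambda desc: calls.append("guide") or "in-operation-area",
    reference_position=(1.0, 2.0),
)
```

The mock run shows the fixed ordering of the five steps: the platform moves first, the image is captured only after the reference position is reached, and the arm is guided only with corrected information.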
As shown in fig. 2, another robot guiding method provided in the embodiment of the present invention may include steps S201 to S205:
S201, controlling the mobile platform to move to a preset reference position;
s202, after the mobile platform reaches the reference position, acquiring image data of the target to be operated by using the image acquisition equipment;
it should be noted that steps S201, S202 and S205 are the same as steps S101, S102 and S105 described above, and thus will not be described here.
S203, carrying out data analysis on the image data to obtain specified description information; wherein the specified description information includes: angular offset information, depth information, and line offset information;
the angular offset information is the angular offset information from the image plane of the image acquisition device to the target to be operated in the base coordinate system; the depth information is the vertical distance from the end effector to the target to be operated in the base coordinate system; the line offset information is the height distance and the lateral distance from the end effector to the target to be operated in the base coordinate system; and the base coordinate system is a coordinate system centered on the multi-axis robot;
It can be understood that the angular offset information is the angular offset information from the image plane of the image acquisition device to the target to be operated in the base coordinate system; after correction, the angular offset from the image plane of the image acquisition device to the target to be operated should be 0°, that is, the image plane of the image acquisition device and the target to be operated should be parallel. The depth information is the vertical distance from the end effector to the target to be operated in the base coordinate system after the angular offset information has been corrected, and can be understood as the vertical distance from the end effector to the target to be operated from a top view. The line offset information is the height distance and the lateral distance from the end effector to the target to be operated in the base coordinate system; it can be understood as the height distance and the lateral distance from the end effector to the target to be operated obtained from the front view of the multi-axis robot after the depth information has been corrected.
S204, according to a predetermined correction order, for each type of information among the angular offset information, the depth information and the line offset information, carrying out error correction on that type of information based on the error correction mode corresponding to that type of information;
wherein the predetermined correction order is a correction order regarding the angular offset information, the depth information, and the line offset information.
The angle offset information, the depth information and the line offset information have corresponding error correction methods, and the angle offset information, the depth information and the line offset information are error-corrected according to a predetermined correction sequence and a corresponding error correction mode.
Optionally, in one implementation, the predetermined correction sequence includes:
and sequentially performing error correction on the angle offset information, the depth information and the line offset information.
It is understood that the angular offset information is corrected first, the depth information is corrected after the angular offset information is corrected, and the line offset information is corrected last after the depth information is corrected.
S205, performing position guidance on the multi-axis robot according to the corrected specified description information, so that the end effector of the guided multi-axis robot reaches the operation area for the target to be operated;
The robot guiding method provided by the embodiment of the invention can control the mobile platform to move to a preset reference position; after the mobile platform reaches the reference position, acquire image data of the target to be operated by using the image acquisition device; carry out data analysis on the image data to obtain the specified description information; for each type of information among the angular offset information, the depth information and the line offset information, perform error correction on that type of information according to the predetermined correction order and based on the error correction mode corresponding to that type of information; and perform position guidance on the multi-axis robot according to the corrected specified description information, so that the end effector of the guided multi-axis robot reaches the operation area for the target to be operated. This solution thus corrects each type of information based on the error correction mode corresponding to that type of information and according to the correction order. Therefore, the solution can carry out error correction on every type of information in the specified description information, improving the repositioning accuracy of the multi-axis robot and thereby improving the repositioning accuracy of the mobile multi-axis robot during work.
Based on the above robot guiding method, the error correction manner corresponding to each piece of specified description information will be described in a predetermined correction order as follows:
optionally, in one implementation, the error correction method corresponding to the angular offset information includes steps A1-A7:
A1, acquiring the offset direction of the image plane relative to the target to be operated, the offset direction being represented by the angular offset information;
wherein the direction of the offset of the image plane with respect to the object to be operated can be characterized by angular offset information.
It is understood that the offset direction may be offset to the left or to the right.
It can be understood that, under normal conditions, the offset angle of the image plane relative to the target to be operated is 0°, i.e. the two are parallel; however, because the repositioning accuracy of the mobile platform is not high, as described above, an offset of the image plane relative to the target to be operated may occur.
In addition, the offset direction of the image plane relative to the target to be operated, represented by the angular offset information, can also be obtained through calculation. Illustratively, the normal plane of the direction in which the end of the image plane currently points is obtained, the image acquisition device is translated leftwards or rightwards within that normal plane, and the imaging lengths of the target to be operated before and after the translation are calculated respectively, from which it can be determined whether the end currently points to the left or to the right.
For a better understanding of the offset direction of the image plane with respect to the object to be operated, reference can be made to the accompanying drawings, as shown in fig. 5 (a) and 5 (b):
In the figure, 510 is the target to be operated in a top view, 520 is the aperture, and 530 is the image plane; the aperture is a device, usually located in the lens, for controlling the quantity of light that passes through the lens and reaches the photosensitive surface inside the camera body.
Fig. 5(a) shows the case in which the image plane and the target to be operated are parallel; fig. 5(b) shows the offset direction of the image plane relative to the target to be operated when an error occurs, and as shown in fig. 5(b), the offset direction is to the right.
It should be noted that, for ease of understanding, the image plane and the target to be operated may be regarded as pinhole imaging, and the aperture may be regarded as the pinhole in pinhole imaging.
A2, acquiring an offset angle represented by the angle offset information as a current offset angle;
It should be noted that the angular offset information can characterize not only the offset direction of the image plane relative to the target to be operated but also the offset angle, and the offset angle is used as the current offset angle.
It will be appreciated that the offset angle may also be referred to as an initial angle error, and that the specific nomenclature is not specifically limited in embodiments of the present invention.
By way of example, the angular offset information may characterize the image plane as being offset to the right by 30 ° with respect to the object to be operated.
A3, offsetting the current offset angle by the first step length in the direction opposite to the offset direction;
the first step length is an angle value related to an offset angle, and the initial value of the first step length is a preset offset angle;
it can be understood that the initial value of the first step is a preset offset angle, and may be changed according to the magnitude of the current offset angle, which is not specifically limited in the embodiment of the present invention.
For example, if the current offset angle is 20 °, the offset direction is offset to the right, and the preset first step is 5 °, then the first step may be offset to the opposite direction of the offset direction, i.e., offset to the left by 5 °.
Step A4, judging whether the offset direction changes after offsetting the first step length;
it will be appreciated that after shifting the first step, either the shift direction changes or the shift direction does not change.
For example, if the current offset angle is 20 °, the offset direction is offset to the right, and the preset first step is 5 °, then the offset direction is still offset to the right after the first step is offset to the left, so the offset direction is unchanged; if the current offset angle is 3 degrees and the offset direction is offset to the right, the preset first step is 5 degrees, and the offset direction is offset to the left after the offset of the first step, so the offset direction is changed.
Step A5, if the offset direction does not change, taking the offset angle obtained after offsetting by the first step length as the new current offset angle, and returning to the step of offsetting the current offset angle by the first step length in the direction opposite to the offset direction, until the offset direction changes;
It can be understood that, if the offset direction does not change, the offset angle obtained after offsetting by the first step length is taken as the new current offset angle, and the above step is repeated until the offset direction changes.
For example, if the current offset angle is 20°, the offset direction is to the right, and the preset first step length is 5°, then after offsetting by the first step length to the left, the offset direction is still to the right and has not changed. At this point the offset angle is 15°, which is taken as the new current offset angle; it is then offset a further 5° to the left, giving 10° as the new current offset angle, and so on until the offset direction changes.
Step A6, if the offset direction changes, taking the offset angle obtained after offsetting by the first step length as the new current offset angle, reducing the first step length proportionally to obtain a new first step length, and returning to the step of offsetting the current offset angle by the first step length in the direction opposite to the offset direction, until the first step length is reduced to less than the angle execution precision;
Wherein the angle execution accuracy characterizes the accuracy of the angle offset information within a reasonable error range;
It can be understood that, if the offset direction changes, the offset angle obtained after offsetting by the first step length is taken as the new current offset angle, the first step length is reduced proportionally to obtain a new first step length, and the step of offsetting the current offset angle by the first step length in the direction opposite to the offset direction is repeated until the first step length is reduced to less than the angle execution precision.
It should be noted that the specific ratio of the equal ratio reduction may be adjusted according to the actual use situation, which is not particularly limited in the embodiment of the present invention.
It can be understood that the angle execution precision is usually a preset precision within a reasonable error range. Normally, after 4-8 proportional reductions, the first step length reaches the angle execution precision, which is usually ±0.5°; the embodiment of the present invention does not specifically limit this.
For example, if the current offset angle is 3°, the offset direction is to the right, and the preset first step length is 5°, then after offsetting by the first step length the offset direction becomes to the left, so the offset direction has changed. At this point the offset angle is 2° to the left, which is taken as the new current offset angle; the first step length is reduced by one half to obtain a new first step length of 2.5°, and the step of offsetting the current offset angle by the first step length in the direction opposite to the offset direction is repeated until the first step length is reduced to less than the angle execution precision of ±0.5°.
Step A7, determining the angle offset information from the offset angle obtained by the last offset, to obtain corrected angle offset information.
It is understood that the angle offset information of the offset angle obtained by the last offset is used as the corrected angle offset information, so that the angle offset information within the accuracy of angle execution can be obtained.
For example, if the offset angle obtained by the last offset is 0.1° to the left, the corresponding angle offset information is taken as the corrected angle offset information.
It should be noted that the error correction mode for the angular offset information as a whole may be understood as a bisection method of iterative approximation: by iteratively shrinking the first step length, error correction is performed on the angular offset information a number of times, so as to obtain the corrected angular offset information.
The error correction mode corresponding to the angular offset information provided by the embodiment of the invention can perform correction through iterative approximation of the angular offset information, thereby obtaining corrected angular offset information. The accuracy of the corrected angular offset information is approached over a number of iterations until the angular offset information is within the angle execution precision, i.e. within the reasonable error range of the angular offset information. Therefore, the embodiment of the invention can effectively improve the angular offset accuracy of the multi-axis robot, thereby improving the repositioning accuracy of the multi-axis robot and the repositioning accuracy of the mobile multi-axis robot during work.
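Steps A1-A7 amount to a sign-driven, bisection-style search. The following Python sketch works under assumed conventions (a positive angle means "offset to the right", and halving is used as the proportional reduction, which the text notes is adjustable):

```python
def correct_angle(initial_offset_deg, first_step_deg=5.0,
                  precision_deg=0.5, shrink=0.5):
    """Iterative-approximation correction of the angular offset (steps A1-A7).
    Positive values mean 'offset to the right'. Each move steps against the
    current offset direction; the step shrinks whenever the direction flips,
    and the loop ends once the step is below the angle execution precision."""
    angle = initial_offset_deg
    step = first_step_deg
    while step >= precision_deg:
        direction = 1.0 if angle >= 0.0 else -1.0  # A1/A2: current direction
        angle -= direction * step                  # A3: offset against it
        flipped = (angle >= 0.0) != (direction > 0.0)
        if flipped:
            step *= shrink                         # A6: proportional reduction
        # A5: direction unchanged -> keep the same step and repeat
    return angle                                   # A7: residual offset angle
```

With the worked values from the text (current offset 20° to the right, first step 5°, precision ±0.5°), the residual offset angle ends up within the execution precision.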
Optionally, in one implementation, the error correction method corresponding to the depth information includes steps B1-B5:
Step B1, determining, from a pre-acquired mapping table of each reference image area and the reference depth information corresponding to the target to be operated, two reference image areas adjacent to a specified image area and the reference depth information corresponding to the two reference image areas, so as to obtain a first set of data; the specified image area is the image area of the target to be operated in the image data;
It can be understood that the correction of the depth information may be understood as a correction along the y-axis of the base coordinate system.
It should be noted that the reference image area and the reference depth information have a corresponding relationship, and the relationship is similar in principle to pinhole imaging: when the reference image area is larger, the corresponding reference depth information is smaller; conversely, when the reference image area is smaller, the corresponding reference depth information is larger. The reference image area is the pre-acquired image area corresponding to the target to be operated, and the reference depth information is the pre-acquired vertical distance, corresponding to that reference image area, from the end effector to the target to be operated in the base coordinate system.
For example, when the depth information is 100mm, the image area obtained by image acquisition is 41136pix; when the depth information is 105mm, the image area obtained by image acquisition is 38470pix.
The reference image area and the reference depth information are information in the pre-acquired mapping table of the correspondence between reference image areas and reference depth information. Before error correction is performed, a mapping table of each reference image area and the reference depth information corresponding to the target to be operated therefore needs to be established; the mapping table can be obtained through multiple tests and records. In addition, since the reference image area is a pixel area, unit conversion is required.
Pixel area (pix)  Depth (mm)
41136             100
38470             105
22410             150
21361             155
20232             160
19279             165
18349             170

TABLE 1 (excerpt)
As shown in Table 1, the reference image area is the pixel area column in Table 1, and the reference depth information is the depth column in Table 1. It can be understood that two reference image areas adjacent to the specified image area, and the reference depth information corresponding to those two reference image areas, can be determined based on the information in Table 1. It should be noted that each target to be operated has its own mapping table of reference image areas and reference depth information, and the data in each mapping table are collected in advance.
For example, if the specified image area is 20000pix, then the image area 20232pix may be selected as the reference image area and its corresponding depth information 160mm as the reference depth information, and the image area 19279pix may be selected as the reference image area and its corresponding depth information 165mm as the reference depth information, thereby obtaining the first set of data.
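The adjacent-entry selection of step B1 can be sketched as a simple bracketing search over the mapping table. The Python below uses only the (pixel area, depth) pairs that appear in the worked examples of this text:

```python
def adjacent_entries(mapping, specified_area):
    """Step B1: pick the two reference (pixel_area, depth_mm) entries that
    bracket the specified image area. `mapping` holds pre-collected pairs."""
    ordered = sorted(mapping)  # ascending by pixel area
    for lower, upper in zip(ordered, ordered[1:]):
        if lower[0] <= specified_area <= upper[0]:
            return lower, upper
    raise ValueError("specified image area lies outside the mapping table")

# Pairs taken from the worked examples (pixel area in pix, depth in mm).
table = [(41136, 100), (38470, 105), (22410, 150), (21361, 155),
         (20232, 160), (19279, 165), (18349, 170)]
```

For a specified image area of 20000pix, this returns the 19279pix/165mm and 20232pix/160mm entries, matching the example above.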
Step B2, determining, from the mapping table, two other reference image areas in addition to the reference image areas in the first set of data, and the reference depth information corresponding to the two other reference image areas, to obtain a second set of data;
it will be appreciated that the other two reference image areas selected are not necessarily adjacent to the image areas in the first set of data, and the second set of data is not particularly limited in the embodiments of the present invention.
For example, if the specified image area is 20000pix, the image area 20232pix with its corresponding depth information 160mm, and the image area 19279pix with its corresponding depth information 165mm, may be selected as the first set of data. Then two other reference image areas and their corresponding reference depth information may be selected: the image area 18349pix with its corresponding depth information 170mm, and the image area 22410pix with its corresponding depth information 150mm, thereby obtaining the second set of data.
Step B3, solving the constant parameters in the first linear equation by substituting the first set of data and the second set of data into the first linear equation. The first linear equation is a quadratic equation taking the image area of an arbitrary target as the independent variable and the depth information of that target as the dependent variable, and the equations used when substituting the first set of data and the second set of data into the first linear equation for parameter solving are as follows:
d1 − (a·s1² + b·s1 + c) = 0

d2 − (a·s2² + b·s2 + c) = 0

min over a, b, c: [d3 − (a·s3² + b·s3 + c)]² + [d4 − (a·s4² + b·s4 + c)]²

wherein s1 and s2 are the two reference image areas in the first set of data, s3 and s4 are the two reference image areas in the second set of data, di = pi + t (i = 1, 2, 3, 4), p1 is the reference depth information corresponding to the reference image area s1 in the mapping table, p2 is the reference depth information corresponding to the reference image area s2 in the mapping table, p3 is the reference depth information corresponding to the reference image area s3 in the mapping table, p4 is the reference depth information corresponding to the reference image area s4 in the mapping table, t is the distance from the normal plane of the end effector to the normal plane of the aperture, and a, b and c are the constant parameters of the first linear equation to be solved;
It can be understood that the first set of data may be substituted into the first and second equations of the first linear equation; two of the parameters a, b and c can then be regarded as functions of the remaining parameter. The second set of data is substituted into the third equation, which is essentially the minimization of a scalar function; taking its derivative directly yields a linear equation in one unknown, from which a, b and c are obtained. d is an important quantity in the equations, composed of the depth information p corresponding to the image area and the distance t from the normal plane of the end effector to the normal plane of the aperture, where t is a constant. It should be noted that the first and second equations being equal to 0 expresses that the points formed by the two reference image areas in the first set of data and their corresponding d lie exactly on the first linear equation, while the third equation, into which the second set of data is substituted, is essentially the optimization of a scalar function, whose purpose is to make the points formed by the two reference image areas in the second set of data and their corresponding d approximate the first linear equation as closely as possible. It can also be understood that the first linear equation passes through the points represented by the set of data adjacent to the image area in the image data and approximates the points represented by the other set of data.
Illustratively, the designated image area is 20785pix. Two reference image areas adjacent to the designated image area, together with the reference depth information corresponding to them, are determined from the mapping table to obtain a first set of data: the reference image area 21361pix and its corresponding reference depth information 155mm are substituted into the first equation of the first linear equation [equation image], and the reference image area 20232pix and its corresponding reference depth information 160mm are substituted into the second equation [equation image]. Two other reference image areas, besides those in the first set of data, and their corresponding reference depth information are then determined from the mapping table to obtain a second set of data: the reference image area 22410pix with corresponding depth information 150mm and the reference image area 19279pix with corresponding depth information 165mm are substituted into the third equation [equation image].
Thereby solving the constant parameters a, b and c in the first linear equation.
Step B4, substituting the specified image area into the first linear equation with the constant parameters solved, to obtain a function value;
it will be appreciated that substituting the specified image area into the first linear equation with the constant parameters solved yields d; since d is composed of the depth information p and the constant t [equation image], the function value p, i.e., the depth information, can be obtained.
Step B5, correcting the depth information by using the function value to obtain corrected depth information;
it is understood that the depth information is corrected by using the function value obtained as described above, thereby obtaining corrected depth information.
Illustratively, the function value is 100mm, the current depth information is 120mm, and the current depth information is corrected to the function value, thereby obtaining corrected depth information of 100mm.
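Steps B1-B5 can be sketched in code. The equation images in the publication are not reproduced, so the concrete model below is an assumption consistent with the surrounding description: d = 1/(p + t) is fitted as a quadratic in the image area s, the first set of data is interpolated exactly (the equations set equal to 0), and the second set is matched in a least-squares sense (the scalar-function optimization). The function names and the value of t are likewise hypothetical.

```python
import numpy as np

def fit_depth_model(first_set, second_set, t=50.0):
    """Sketch of steps B1-B3: solve the constant parameters a, b, c.

    Assumed model (the real equations are image-only in the source):
    d = 1/(p + t) = a*s**2 + b*s + c.  The two (area, depth) pairs in
    `first_set` are interpolated exactly; the two pairs in `second_set`
    are matched in a least-squares sense."""
    d_of = lambda p: 1.0 / (p + t)   # assumed composition of p and the constant t
    (s1, p1), (s2, p2) = first_set
    d1, d2 = d_of(p1), d_of(p2)

    def ab_for(c):
        # Exact-fit constraints: a*s**2 + b*s = d - c at s1 and s2.
        m = np.array([[s1 ** 2, s1], [s2 ** 2, s2]])
        return np.linalg.solve(m, np.array([d1 - c, d2 - c]))

    def residuals(c):
        a, b = ab_for(c)
        return np.array([a * s ** 2 + b * s + c - d_of(p) for s, p in second_set])

    # The residuals on the second set are affine in c, so setting the
    # derivative of the sum of squares to zero gives c in closed form
    # (the "univariate first-degree equation" of the text).
    r0, r1 = residuals(0.0), residuals(1.0)
    beta = r1 - r0
    c = -float(r0 @ beta) / float(beta @ beta)
    a, b = ab_for(c)
    return float(a), float(b), c

def depth_from_area(s, a, b, c, t=50.0):
    """Sketch of steps B4-B5: evaluate the fitted model at the specified
    image area and invert d = 1/(p + t) to get the function value,
    i.e., the corrected depth information."""
    return 1.0 / (a * s ** 2 + b * s + c) - t
```

By construction, evaluating the fitted model at the areas in the first set of data reproduces their reference depths exactly, matching the statement that the first two equations equal 0, while the second-set points are only approximated.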
With the error correction mode corresponding to the depth information provided by the embodiment of the invention, the corrected depth information can be calculated directly through the first linear equation, so that the depth information falls within a reasonable error range. Therefore, the embodiment of the invention can effectively improve the depth information precision of the multi-axis robot, thereby improving the repositioning precision of the mobile multi-axis robot during operation.
Optionally, in one implementation, the error correction method corresponding to the line offset information includes steps C1-C9:
Step C1, estimating the height error and the transverse error from the end effector to the target to be operated;
it will be appreciated that after the depth information correction is completed, error correction may be performed on the line offset information, which occurs only in the xz plane of the base coordinate system.
In one implementation, estimating the height error and lateral error of the end effector to the target to be operated may include steps C11-C14:
step C11, after the position of the multi-axis robot is guided according to the corrected depth information, the image data of the target to be operated is acquired again by using the image acquisition equipment, and auxiliary image data are obtained;
it will be appreciated that after the multi-axis robot is position-guided according to the corrected depth information, image data of the target to be operated may be acquired again by the image acquisition apparatus, thereby obtaining auxiliary image data; at this point the depth information and the angular offset information have already been corrected.
Step C12, carrying out data analysis on the auxiliary image data to obtain depth information;
it will be appreciated that the content of the data analysis of the auxiliary image data has been described, and will not be described in detail herein.
Step C13, calculating the scaling ratio of the obtained depth information and the corrected depth information to obtain size information;
the depth information obtained by the auxiliary image analysis is in units of pix as pixels, and requires unit conversion.
It will be appreciated that comparing the depth information from the auxiliary image analysis with the corrected depth information yields a scaling ratio, which is the scale information.
By way of example, the depth information from the auxiliary image analysis is 10mm and the corrected depth information is 100mm, then the scale information is 1:10.
And step C14, estimating the height error and the transverse error of the end effector to the target to be operated by using the size information and the line offset information.
It will be appreciated that the height error and lateral error of the end effector to the target to be manipulated may be estimated based on the scale information and line offset information obtained from analysis of the image data.
By way of example, if the line offset information obtained from the image data analysis characterizes a height error of 5mm and a lateral error of 10mm from the end effector to the target to be operated, then dividing that line offset information by the scale information yields a predicted height error of 50mm and a lateral error of 100mm from the end effector to the target to be operated.
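Steps C13-C14 can be sketched as follows; the function and argument names are illustrative rather than from the publication, and the scale is represented as the plain ratio of the two depth readings:

```python
def estimate_errors(aux_depth_mm, corrected_depth_mm, line_offset_mm):
    """Sketch of steps C13-C14: the scale information is the ratio of
    the depth read from the auxiliary image to the corrected depth
    (e.g. 10mm : 100mm -> 1:10); the height and lateral errors measured
    in the image are divided by that scale to estimate the physical
    errors from the end effector to the target to be operated."""
    scale = aux_depth_mm / corrected_depth_mm
    height_err, lateral_err = line_offset_mm
    return height_err / scale, lateral_err / scale
```

With the numbers from the example, a 1:10 scale and measured errors of 5mm and 10mm give estimated errors of 50mm and 100mm.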
Step C2, carrying out preliminary correction on the line offset information according to the height error and the transverse error to obtain the line offset information after preliminary correction;
the primarily corrected line offset information comprises a primarily corrected height distance and a primarily corrected lateral distance.
Illustratively, the line offset information is primarily corrected according to the height error of 50mm and the transverse error of 100mm, so as to obtain the primarily corrected line offset information.
Step C3, obtaining the offset direction of the height distance and the transverse distance, which is represented by the line offset information after the preliminary correction;
wherein the offset direction with respect to the height distance and the lateral distance can be characterized by line offset information.
It is understood that the shift direction of the lateral distance in the line shift information may be left shift or right shift; the shift direction of the height distance in the line shift information may be an upward shift or a downward shift.
Since the line offset information is two-dimensional, the height distance and the lateral distance can be corrected simultaneously during error correction.
Step C4, acquiring the offset distance of the height distance and the transverse distance represented by the line offset information after the preliminary correction as the current offset distance;
It should be noted that the line offset information can represent not only the offset direction of the height distance and the lateral distance, but also their offset distances, which are used as the current offset distance.
By way of example, the line offset information may characterize a lateral distance offset to the right of 30mm and a height distance offset to the top of 20mm.
Step C5, shifting the current offset distance to a second step length in the direction opposite to the offset direction;
the second step length is a length value related to the offset distance, and the initial value of the second step length is a preset offset distance.
It can be appreciated that the initial value of the second step is a preset offset distance, and may be changed according to the magnitude of the current offset distance, which is not specifically limited in the embodiment of the present invention.
For example, if the current offset distance of the lateral distance is 20mm, the offset direction is an offset to the right, and the preset second step length is 5mm, then the offset may be made by the second step length in the direction opposite to the offset direction, i.e., 5mm to the left; if the current offset distance of the height distance is 20mm, the offset direction is an upward offset, and the preset second step length is 5mm, then the offset may be made by the second step length in the direction opposite to the offset direction, i.e., 5mm downward.
Step C6, judging whether the offset direction changes after offsetting the second step length;
it will be appreciated that after shifting the second step, either the shift direction changes or the shift direction does not change.
For example, if the current offset distance of the lateral distance is 20mm, the offset direction is offset to the right, and the preset second step length is 5mm, then the offset direction is still offset to the right after the second step length is offset to the left, so the offset direction is unchanged; if the current offset distance of the lateral distance is 3mm, the offset direction is offset to the right, and the preset second step length is 5mm, the offset direction is offset to the left after the offset of the second step length, so the offset direction is changed.
Step C7, if the offset direction does not change, taking the offset distance obtained after the second step length is offset as the new current offset distance and returning to the step of offsetting the current offset distance by the second step length in the direction opposite to the offset direction, until the offset direction changes;
it will be understood that, in the case where the offset direction is not changed, the offset distance obtained after the second step is offset is taken as the new current offset distance, and the above steps are returned until the offset direction is changed.
For example, if the current offset distance of the lateral distance is 20mm, the offset direction is an offset to the right, and the preset second step length is 5mm, then after being offset 5mm to the left the offset direction is still to the right and does not change; the offset distance is now 15mm, which is taken as the new current offset distance and offset a further 5mm to the left, giving 10mm as the new current offset distance, and so on until the offset direction changes.
Step C8, if the offset direction changes, taking the offset distance obtained after the second step length is offset as a new current offset distance, and reducing the second step length in equal proportion to obtain a new second step length, and returning to the step of shifting the current offset distance to the opposite direction of the offset direction by the second step length until the second step length is reduced to be smaller than the distance execution precision;
the distance execution precision represents the precision of the line offset information in a reasonable error range;
it will be understood that, in the above step, when the offset direction is changed, the offset distance obtained after the second step is offset is taken as the new current offset distance, the second step is reduced in equal proportion to obtain the new second step, and the step of offsetting the current offset distance by the second step in the opposite direction of the offset direction is returned until the second step is reduced to be smaller than the distance execution precision.
It should be noted that the specific ratio of the equal ratio reduction may be adjusted according to the actual use situation, which is not particularly limited in the embodiment of the present invention.
It can be understood that the distance execution precision is usually preset in advance within a reasonable error range; normally, after 4-6 equal-proportion reductions, the second step length reaches the distance execution precision, which is usually ±0.2mm. This is not specifically limited in the embodiment of the invention.
For example, if the current offset distance of the lateral distance is 3mm, the offset direction is an offset to the right, and the preset second step length is 5mm, then after the second step length is offset the offset direction becomes an offset to the left, so the offset direction changes; at this time the offset is 2mm to the left, which is taken as the new current offset distance, and the second step length is reduced by half in equal proportion to obtain a new second step length of 2.5mm. The step of offsetting the current offset distance by the second step length in the direction opposite to the offset direction is then returned to, until the second step length is reduced below the distance execution precision of ±0.2mm.
And step C9, determining line offset information of the offset distance obtained by the last offset to obtain corrected line offset information.
It is understood that the line shift information of the shift distance obtained by the last shift is used as the corrected line shift information, so that the line shift information within the distance execution accuracy can be obtained.
Illustratively, the line shift information of the shift distance obtained by the last shift is shifted to the left by 0.1mm, shifted upward by 0.2mm, and is taken as corrected line shift information.
It should be noted that the error correction of the line offset information as a whole can be understood as a bidirectional iterative-approximation bisection method: by iteratively shrinking the second step length, the line offset information is error-corrected multiple times to obtain the corrected line offset information.
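The bidirectional iterative-approximation bisection of steps C3-C9 can be sketched per axis as follows. In the real system each shift moves the robot and the remaining offset is re-measured; the sketch below simulates that by stepping a signed offset value directly, and the default step length, execution precision, and shrink ratio are taken from the text's examples:

```python
import math

def bisect_offset(offset_mm, step_mm=5.0, precision_mm=0.2, shrink=0.5):
    """Sketch of the bidirectional iterative-approximation bisection of
    steps C3-C9 for a single axis.  A signed offset (e.g. +30.0 for
    30mm to the right) is repeatedly shifted opposite to its current
    direction; whenever the direction flips, the step length is shrunk
    in equal proportion, until the step falls below the distance
    execution precision."""
    while step_mm >= precision_mm:
        direction = math.copysign(1.0, offset_mm)
        offset_mm -= direction * step_mm      # step C5: shift against the offset direction
        if math.copysign(1.0, offset_mm) != direction:
            step_mm *= shrink                 # step C8: direction changed, shrink the step
    return offset_mm                          # step C9: last offset is the corrected value
```

Running it on the 3mm example from the text first overshoots to 2mm on the other side, then halves the step to 2.5mm, and so on until the step drops below the ±0.2mm execution precision; the height distance and the lateral distance would each be processed this way.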
According to the error correction mode corresponding to the line offset information provided by the embodiment of the invention, since the line offset information comprises the height distance and the transverse distance, the line offset information can be corrected in a bidirectional iterative approximation mode, so that corrected line offset information can be obtained. The accuracy of the corrected line offset information is approximated through a number of bi-directional iterations until the line offset information is within the distance execution accuracy, i.e., within a reasonable range of error of the line offset information. Therefore, the embodiment of the invention can effectively improve the line offset information precision of the multi-axis robot, thereby improving the repositioning precision of the multi-axis robot, and improving the repositioning precision of the mobile multi-axis robot when working.
Based on the above method embodiment, as shown in fig. 6, an embodiment of the present invention provides a robot guiding device, which is applied to a control apparatus, where the control apparatus is communicatively connected to a mobile multi-axis robot; the mobile multi-axis robot includes: the system comprises a multi-axis robot and a mobile platform, wherein an image acquisition device is loaded on a mechanical arm of the multi-axis robot, which is provided with an end effector; the device comprises:
a control module 610, configured to control the mobile platform to move to a predetermined reference position; the reference position is a position of a target to be operated, which is obtained after the target to be operated is subjected to preliminary positioning;
an acquisition module 620, configured to acquire image data of the target to be operated by using the image acquisition device after the mobile platform reaches the reference position;
the analysis module 630 is configured to perform data analysis on the image data to obtain specified description information; the specified descriptive information is descriptive information for representing the relative positional relationship between the target to be operated and the multi-axis robot;
an error correction module 640, configured to perform error correction on the specified description information;
And a guiding module 650, configured to guide the position of the multi-axis robot according to the corrected specified description information, so that the end effector of the multi-axis robot after being guided reaches the operation area for the target to be operated.
Optionally, the specifying description information includes: angular offset information, depth information, and line offset information; the angle offset information is the angle offset information from the image plane of the image acquisition device to the target to be operated in a base station coordinate system, the depth information is the vertical distance from the end effector to the target to be operated in the base station coordinate system, the line offset information is the height distance and the transverse distance from the end effector to the target to be operated in the base station coordinate system, and the base station coordinate system is a coordinate system centered on the multi-axis robot;
the error correction module includes:
an error correction sub-module, configured to perform error correction on each type of information of the angular offset information, the depth information, and the line offset information according to a predetermined correction order, based on an error correction manner corresponding to the type of information;
Wherein the predetermined correction order is a correction order regarding the angular offset information, the depth information, and the line offset information.
Optionally, the predetermined correction sequence includes:
and sequentially performing error correction on the angle offset information, the depth information and the line offset information.
Optionally, the error correction method corresponding to the angular offset information includes:
acquiring the offset direction of the image plane relative to the target to be operated, wherein the offset direction is characterized by the angular offset information;
acquiring an offset angle represented by the angle offset information as a current offset angle;
shifting the current shift angle to a first step length in a direction opposite to the shift direction; the first step length is an angle value related to an offset angle, and the initial value of the first step length is a preset offset angle;
judging whether the offset direction changes after offsetting the first step length;
if the offset direction is not changed, the offset angle obtained after the first step length is offset is used as a new current offset angle, and the step of offsetting the current offset angle to the opposite direction of the offset direction by the first step length is returned until the offset direction is changed;
If the offset direction changes, taking the offset angle obtained after the first step length is offset as a new current offset angle, reducing the first step length in equal proportion to obtain a new first step length, and returning to the step of shifting the current offset angle to the opposite direction of the offset direction by the first step length until the first step length is reduced to be smaller than the angle execution precision; wherein the angle execution accuracy characterizes the accuracy of the angle offset information within a reasonable error range;
and determining the angle offset information of the offset angle obtained by the last offset to obtain corrected angle offset information.
Optionally, the error correction method corresponding to the depth information includes:
determining two reference image areas adjacent to a designated image area and reference depth information corresponding to the two reference image areas from a mapping table of each reference image area and reference depth information corresponding to the target to be operated, which is acquired in advance, so as to obtain a first group of data; the specified image area is an image area of the image data about the object to be operated;
determining two other reference image areas except the reference image areas in the first group of data and reference depth information corresponding to the two other reference areas from the mapping table to obtain a second group of data;
Solving constant parameters in a first linear equation by substituting the first set of data and the second set of data into the first linear equation for constant parameter solving; the first linear equation is a quadratic equation using the image area of any object as the independent variable and the depth information of any object as the dependent variable, and the equation contents used when substituting the first set of data and the second set of data into the first linear equation for parameter solving are as follows:
[three equation images, corresponding to the first, second and third equations of the first linear equation]
wherein s1 and s2 are the two reference image areas in the first set of data, and s0 and s3 are the two reference image areas in the second set of data, [equation images] p1 is the reference depth information corresponding to the reference image area s1 in the mapping table, p2 is the reference depth information corresponding to the reference image area s2 in the mapping table, p3 is the reference depth information corresponding to the reference image area s3 in the mapping table, p4 is the depth information corresponding to the reference image area s4 in the mapping table, t is the distance from the normal plane of the end effector to the normal plane of the aperture, and a, b and c are the constant parameters of the first linear equation to be solved;
substituting the specified image area into the first linear equation with the constant parameters solved, to obtain a function value;
And correcting the depth information by using the function value to obtain corrected depth information.
Optionally, the error correction method corresponding to the line offset information includes:
estimating the height error and the transverse error of the end effector to the target to be operated;
performing preliminary correction on the line offset information according to the height error and the transverse error to obtain preliminarily corrected line offset information; the primarily corrected line offset information comprises a primarily corrected height distance and a primarily corrected transverse distance;
acquiring the offset direction of the height distance and the transverse distance, which is represented by the line offset information after the preliminary correction;
acquiring the offset distance of the height distance and the transverse distance represented by the line offset information after the preliminary correction as the current offset distance;
shifting the current offset distance to a second step length in a direction opposite to the offset direction; the second step length is a length value related to the offset distance, and the initial value of the second step length is a preset offset distance;
judging whether the offset direction changes after offsetting the second step length;
if the offset direction does not change, taking the offset distance obtained after the second step length is offset as the new current offset distance and returning to the step of offsetting the current offset distance by the second step length in the direction opposite to the offset direction, until the offset direction changes;
If the offset direction changes, taking the offset distance obtained after the second step length is offset as a new current offset distance, reducing the second step length in an equal proportion to obtain a new second step length, and returning to the step of shifting the current offset distance to the opposite direction of the offset direction by the second step length until the second step length is reduced to be smaller than the distance execution precision; the distance execution precision represents the precision of the line offset information in a reasonable error range;
and determining line offset information of the offset distance obtained by the last offset to obtain corrected line offset information.
Optionally, the estimating the height error and the lateral error of the end effector to the target to be operated includes:
after the position of the multi-axis robot is guided according to the corrected depth information, the image data of the target to be operated is acquired again by utilizing the image acquisition equipment, and auxiliary image data are obtained;
performing data analysis on the auxiliary image data to obtain depth information;
calculating the scaling ratio of the obtained depth information and the corrected depth information to obtain size information;
and estimating the height error and the transverse error of the end effector to the target to be operated by using the size information and the line offset information.
The embodiment of the present invention further provides an electronic device, as shown in fig. 7, including a processor 701, a communication interface 702, a memory 703 and a communication bus 704, where the processor 701, the communication interface 702 and the memory 703 communicate with one another through the communication bus 704;
a memory 703 for storing a computer program;
the processor 701 is configured to implement the steps of the robot guiding method according to the embodiment of the present invention when executing the program stored in the memory 703.
The communication bus mentioned above for the electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The communication bus may be classified as an address bus, a data bus, a control bus, and so on. For ease of illustration, the figure shows only one bold line, but this does not mean there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In yet another embodiment of the present invention, there is also provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of any of the robot guiding methods described above.
In a further embodiment of the present invention, a computer program product comprising instructions which, when run on a computer, cause the computer to perform any of the robot guiding methods of the above embodiments is also provided.
In the above embodiments, the implementation may be wholly or partly in software, hardware, firmware, or any combination thereof. When implemented in software, it may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to embodiments of the present invention are produced, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), etc.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (10)

1. A robot guiding method, characterized by being applied to a control device, which is communicatively connected to a mobile multi-axis robot; the mobile multi-axis robot includes: the system comprises a multi-axis robot and a mobile platform, wherein an image acquisition device is loaded on a mechanical arm of the multi-axis robot, which is provided with an end effector; the method comprises the following steps:
controlling the mobile platform to move to a preset reference position; the reference position is a position of a target to be operated, which is obtained after the target to be operated is subjected to preliminary positioning;
after the mobile platform reaches the reference position, acquiring image data of the target to be operated by using the image acquisition equipment;
carrying out data analysis on the image data to obtain specified description information; the specified descriptive information is descriptive information for representing the relative positional relationship between the target to be operated and the multi-axis robot;
Performing error correction on the specified description information;
and performing position guidance on the multi-axis robot according to the corrected specified description information so that the end effector of the guided multi-axis robot reaches the operation area aiming at the target to be operated.
2. The method of claim 1, wherein the specified description information comprises: angular offset information, depth information, and line offset information; the angle offset information is the angle offset information from the image plane of the image acquisition device to the target to be operated in a base station coordinate system, the depth information is the vertical distance from the end effector to the target to be operated in the base station coordinate system, the line offset information is the height distance and the transverse distance from the end effector to the target to be operated in the base station coordinate system, and the base station coordinate system is a coordinate system centered on the multi-axis robot;
performing error correction on the specified description information, including:
performing error correction on each type of information among the angular offset information, the depth information and the line offset information according to a predetermined correction sequence, based on the error correction mode corresponding to that type of information;
wherein the predetermined correction sequence is a correction order over the angular offset information, the depth information, and the line offset information.
3. The method of claim 2, wherein the predetermined correction sequence comprises:
and sequentially performing error correction on the angle offset information, the depth information and the line offset information.
4. The method of claim 2, wherein the error correction means corresponding to the angular offset information comprises:
acquiring the offset direction of the image plane relative to the target to be operated, the offset direction being characterized by the angular offset information;
acquiring the offset angle represented by the angular offset information as the current offset angle;
shifting the current offset angle by a first step length in the direction opposite to the offset direction; the first step length is an angle value related to the offset angle, and the initial value of the first step length is a preset offset angle;
judging whether the offset direction changes after shifting by the first step length;
if the offset direction does not change, taking the offset angle obtained after shifting by the first step length as the new current offset angle, and returning to the step of shifting the current offset angle by the first step length in the direction opposite to the offset direction, until the offset direction changes;
if the offset direction changes, taking the offset angle obtained after shifting by the first step length as the new current offset angle, proportionally reducing the first step length to obtain a new first step length, and returning to the step of shifting the current offset angle by the first step length in the direction opposite to the offset direction, until the first step length is reduced below the angle execution precision; wherein the angle execution precision characterizes the precision of the angular offset information within a reasonable error range;
and taking the angular offset information of the offset angle obtained by the last shift as the corrected angular offset information.
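The correction loop of claim 4 is essentially a bracketing search: step the offset against its measured direction, and shrink the step each time the direction flips. A minimal sketch in Python, assuming the sign of a single number stands in for the measured offset direction and that "reducing in equal proportion" means halving (both are illustrative assumptions, not specified by the claim):

```python
def bracketing_correct(offset, step, precision, shrink=0.5):
    """Drive a signed offset toward zero, as in claim 4:
    - shift the current offset by `step` opposite to its direction;
    - if the direction flips, proportionally reduce the step;
    - stop once the step falls below the execution precision.
    The sign of `offset` stands in for the measured offset direction."""
    while step >= precision:
        direction = 1.0 if offset > 0 else -1.0
        offset -= direction * step              # shift against the offset direction
        new_direction = 1.0 if offset > 0 else -1.0
        if new_direction != direction:          # direction changed: overshoot detected
            step *= shrink                      # reduce the step in equal proportion
    return offset                               # residual offset, within ~2x precision
```

For example, `bracketing_correct(10.0, 4.0, 0.5)` leaves a residual offset no larger than twice the execution precision; claim 6 applies the same scheme to the height and transverse distances.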
5. The method of claim 2, wherein the error correction means corresponding to the depth information comprises:
determining, from a pre-acquired mapping table between reference image areas and reference depth information for the target to be operated, two reference image areas adjacent to a specified image area and the reference depth information corresponding to these two reference image areas, to obtain a first set of data; the specified image area is the image area of the target to be operated in the image data;
determining, from the mapping table, two other reference image areas besides those in the first set of data and the reference depth information corresponding to these two other reference image areas, to obtain a second set of data;
solving the constant parameters in a first linear equation by substituting the first set of data and the second set of data into the first linear equation; the first linear equation is a quadratic equation taking the image area of any object as the independent variable and the depth information of that object as the dependent variable, and the equations used when substituting the first set of data and the second set of data into the first linear equation for parameter solving are as follows:
[three equations, rendered as images in the original publication, relating the reference image areas, the reference depth information, the distance t, and the constant parameters a, b and c]
wherein s1 and s2 are the two reference image areas in the first set of data, s3 and s4 are the two reference image areas in the second set of data, p1, p2, p3 and p4 are the reference depth information corresponding to the reference image areas s1, s2, s3 and s4 in the mapping table, t is the distance from the normal plane of the end effector to the normal plane of the aperture, and a, b and c are the constant parameters of the first linear equation to be solved;
substituting the specified image area into the first linear equation with the constant parameters solved, to obtain a function value;
and correcting the depth information by using the function value, to obtain corrected depth information.
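The equations of claim 5 are published only as images, so their exact form is unknown. One plausible reading is fitting the quadratic p = a·s² + b·s + c through reference (image area, depth) pairs and evaluating it at the specified image area. The sketch below solves from three pairs via Cramer's rule; the equation form and the use of three rather than four pairs are assumptions:

```python
def fit_quadratic(points):
    """Solve a*s^2 + b*s + c = p exactly through three (area, depth)
    pairs via Cramer's rule; a, b, c are the constant parameters."""
    (s1, p1), (s2, p2), (s3, p3) = points

    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[s1 * s1, s1, 1.0], [s2 * s2, s2, 1.0], [s3 * s3, s3, 1.0]]
    rhs = (p1, p2, p3)
    d = det3(A)

    def replaced(col):  # replace one column of A with the right-hand side
        return [[rhs[r] if c == col else A[r][c] for c in range(3)]
                for r in range(3)]

    return tuple(det3(replaced(col)) / d for col in range(3))


def corrected_depth(area, points):
    """Evaluate the fitted equation at the specified image area."""
    a, b, c = fit_quadratic(points)
    return a * area * area + b * area + c
```

For reference pairs lying on p = 2s² + 3s + 1, `corrected_depth(4, [(1, 6), (2, 15), (3, 28)])` returns 45.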
6. The method of claim 2, wherein the error correction means corresponding to the line offset information comprises:
estimating the height error and the transverse error of the end effector to the target to be operated;
performing preliminary correction on the line offset information according to the height error and the transverse error to obtain preliminarily corrected line offset information; the preliminarily corrected line offset information comprises a preliminarily corrected height distance and a preliminarily corrected transverse distance;
acquiring the offset directions of the height distance and the transverse distance represented by the preliminarily corrected line offset information;
acquiring the offset distances of the height distance and the transverse distance represented by the preliminarily corrected line offset information as the current offset distance;
shifting the current offset distance by a second step length in the direction opposite to the offset direction; the second step length is a length value related to the offset distance, and the initial value of the second step length is a preset offset distance;
judging whether the offset direction changes after shifting by the second step length;
if the offset direction does not change, taking the offset distance obtained after shifting by the second step length as the new current offset distance, and returning to the step of shifting the current offset distance by the second step length in the direction opposite to the offset direction, until the offset direction changes;
if the offset direction changes, taking the offset distance obtained after shifting by the second step length as the new current offset distance, proportionally reducing the second step length to obtain a new second step length, and returning to the step of shifting the current offset distance by the second step length in the direction opposite to the offset direction, until the second step length is reduced below the distance execution precision; wherein the distance execution precision characterizes the precision of the line offset information within a reasonable error range;
and taking the line offset information of the offset distance obtained by the last shift as the corrected line offset information.
7. The method of claim 6, wherein estimating the height error and the transverse error of the end effector to the target to be operated comprises:
after position guidance of the multi-axis robot according to the corrected depth information, acquiring image data of the target to be operated again by using the image acquisition device, to obtain auxiliary image data;
performing data analysis on the auxiliary image data to obtain depth information;
calculating the scaling ratio between the obtained depth information and the corrected depth information, to obtain size information;
and estimating the height error and the transverse error of the end effector to the target to be operated by using the size information and the line offset information.
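A minimal sketch of this estimation, assuming the size information is the ratio of the re-measured depth to the corrected depth and that each error is the change this scale induces in the corresponding distance (the claim does not give the exact formula, so both are assumptions):

```python
def estimate_line_errors(new_depth, corrected_depth,
                         height_distance, transverse_distance):
    """Claim 7 sketch: the scaling ratio between the re-measured depth
    and the corrected depth (the size information) rescales the height
    and transverse distances; the induced change is taken as the error."""
    scale = new_depth / corrected_depth            # size information
    height_error = height_distance * (scale - 1.0)
    transverse_error = transverse_distance * (scale - 1.0)
    return height_error, transverse_error
```

For instance, if the re-measured depth is twice the corrected depth, each distance doubles and the error equals the original distance.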
8. A robot guiding device, characterized by being applied to a control device communicatively connected to a mobile multi-axis robot; the mobile multi-axis robot comprises a multi-axis robot and a mobile platform, wherein an image acquisition device is mounted on the mechanical arm of the multi-axis robot that carries an end effector; the device comprises:
the control module is used for controlling the mobile platform to move to a preset reference position; wherein the reference position is the position of a target to be operated obtained by preliminarily positioning the target to be operated;
the acquisition module is used for acquiring the image data of the target to be operated by using the image acquisition equipment after the mobile platform reaches the reference position;
the analysis module is used for carrying out data analysis on the image data to obtain specified description information; the specified description information is description information representing the relative positional relationship between the target to be operated and the multi-axis robot;
The error correction module is used for carrying out error correction on the specified description information;
and the guiding module is used for performing position guidance on the multi-axis robot according to the corrected specified description information, so that the end effector of the guided multi-axis robot reaches an operation area for the target to be operated.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is used for storing a computer program;
and the processor is used for implementing the method steps of any one of claims 1-7 when executing the program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored therein which, when executed by a processor, implements the method steps of any one of claims 1-7.
CN202211714189.1A 2022-12-27 2022-12-27 Robot guiding method and device, electronic equipment and storage medium Pending CN116038698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211714189.1A CN116038698A (en) 2022-12-27 2022-12-27 Robot guiding method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116038698A true CN116038698A (en) 2023-05-02

Family

ID=86130671


Country Status (1)

Country Link
CN (1) CN116038698A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106607907A (en) * 2016-12-23 2017-05-03 西安交通大学 Mobile vision robot and measurement and control method thereof
CN110561498A (en) * 2019-09-29 2019-12-13 珠海格力智能装备有限公司 Method and device for determining repeated positioning accuracy of robot and robot
CN111438688A (en) * 2020-02-28 2020-07-24 广东拓斯达科技股份有限公司 Robot correction method, robot correction device, computer equipment and storage medium
US20220026194A1 (en) * 2020-07-24 2022-01-27 Keenon Robotics Co., Ltd. Method and apparatus for determining pose information of a robot, device and medium


Similar Documents

Publication Publication Date Title
CN110640747B (en) Hand-eye calibration method and system for robot, electronic equipment and storage medium
CN110355755B (en) Robot hand-eye system calibration method, device, equipment and storage medium
CN106780623B (en) Rapid calibration method for robot vision system
CN112183171B (en) Method and device for building beacon map based on visual beacon
CN110561423A (en) pose transformation method, robot and storage medium
CN109366472B (en) Method and device for placing articles by robot, computer equipment and storage medium
CN113211445B (en) Robot parameter calibration method, device, equipment and storage medium
CN112330752A (en) Multi-camera combined calibration method and device, terminal equipment and readable storage medium
US20240001558A1 (en) Robot calibration method, robot and computer-readable storage medium
CN109952176A (en) A kind of robot calibration method, system, robot and storage medium
CN112686950A (en) Pose estimation method and device, terminal equipment and computer readable storage medium
US12046008B2 (en) Pose calibration method, robot and computer readable storage medium
CN115351389A (en) Automatic welding method and device, electronic device and storage medium
CN114387352A (en) External parameter calibration method, device, equipment and storage medium
CN113172636B (en) Automatic hand-eye calibration method and device and storage medium
CN111815714B (en) Fisheye camera calibration method and device, terminal equipment and storage medium
CN111971529A (en) Method and apparatus for managing robot system
CN112767479A (en) Position information detection method, device and system and computer readable storage medium
CN112631200A (en) Machine tool axis measuring method and device
CN116038698A (en) Robot guiding method and device, electronic equipment and storage medium
CN113635299B (en) Mechanical arm correction method, terminal device and storage medium
CN115272410A (en) Dynamic target tracking method, device, equipment and medium without calibration vision
CN114310869B (en) Robot hand-eye calibration method, system and terminal
CN116136388A (en) Calibration method, device, equipment and storage medium for robot tool coordinate system
Scaria et al. Cost Effective Real Time Vision Interface for Off Line Simulation of Fanuc Robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination