CN111699445B - Robot kinematics model optimization method and system and storage device - Google Patents


Info

Publication number: CN111699445B
Application number: CN201880088587.2A
Authority: CN (China)
Inventor: 付伟宁 (Fu Weining)
Assignee (original and current): Shenzhen A&E Intelligent Technology Institute Co Ltd
Application publication: CN111699445A; granted as CN111699445B
Original language: Chinese (zh)
Legal status: Active (granted)
Prior art keywords: coordinate system, robot, coordinates, image, feature points


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 — Programme-control systems
    • G05B19/02 — Programme-control systems electric
    • G05B19/18 — Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form


Abstract

The application discloses a robot kinematics model optimization method, system, and storage device. The method includes: acquiring an image of an end part of a robot with an image pickup apparatus; extracting feature points of the end part from the image; performing three-dimensional reconstruction on the feature points to obtain their coordinates in an image coordinate system, and obtaining their coordinates in a world coordinate system according to the conversion relationship between the image coordinate system and the world coordinate system; obtaining the actual pose of the end part of the robot from the coordinates of the feature points in the world coordinate system; and optimizing the kinematic model of the robot according to the actual pose of the end part. By acquiring an image of the end part with the image pickup apparatus and three-dimensionally reconstructing the feature points in the image, the actual pose of the end part can be obtained conveniently, and the kinematic model of the robot is optimized by using the actual pose to correct its kinematic parameters.

Description

Robot kinematics model optimization method and system and storage device
Technical Field
The present application relates to the field of robot control technologies, and in particular, to a method, a system, and a storage device for optimizing a kinematic model of a robot.
Background
With the development of science and technology, robots are increasingly used in daily life and industrial production. Achieving high control precision requires an accurate kinematic model of the robot, and building such a model requires accurate robot pose data. In the prior art, a laser tracker is generally used to measure the actual pose data of the robot; the parameters of the actual kinematic model are then calculated with a suitable mathematical algorithm, and finally the obtained parameters are fed back into the robot control system to compensate the parameters of the theoretical kinematic model, thereby optimizing the kinematic model of the robot.
In researching the prior art, the inventor of the present application found that a laser tracker system is structurally complex and inconvenient to install, which hinders its application to robots in industrial environments.
Disclosure of Invention
The present application provides a robot kinematics model optimization method, system, and storage device, which conveniently optimize the kinematic model of a robot and improve positioning precision during robot control.
To solve the above technical problem, one technical solution adopted by the present application is to provide a robot kinematics model optimization method, including: acquiring an image of an end part of a robot with an image pickup apparatus; extracting feature points of the end part from the image; performing three-dimensional reconstruction on the feature points to obtain their coordinates in an image coordinate system, and obtaining their coordinates in a world coordinate system according to the conversion relationship between the image coordinate system and the world coordinate system; obtaining the actual pose of the end part of the robot from the coordinates of the feature points in the world coordinate system; and optimizing the kinematic model of the robot according to the actual pose of the end part.
To solve the above technical problem, another technical solution adopted by the present application is to provide a robot kinematics model optimization system, including a processor, a memory, and an image pickup apparatus, where the memory stores program instructions that the processor can load and execute to perform a robot kinematics model optimization method including: acquiring an image of an end part of a robot with the image pickup apparatus; extracting feature points of the end part from the image; performing three-dimensional reconstruction on the feature points to obtain their coordinates in a world coordinate system; obtaining the actual pose of the end part of the robot from the coordinates of the feature points in the world coordinate system; and optimizing the kinematic model of the robot according to the actual pose of the end part.
To solve the above technical problem, another technical solution provided by the present application is to provide a storage device storing program instructions that can be loaded and executed to perform the robot kinematics model optimization method described above.
The beneficial effects of the present application are: by acquiring an image of the end part of the robot with the image pickup apparatus and three-dimensionally reconstructing the feature points in the image, the actual pose of the end part can be obtained conveniently, and the kinematic model of the robot is optimized by using the actual pose to correct its kinematic parameters. The kinematic model of the robot can thus be optimized conveniently, improving the positioning precision of the robot.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of a robot kinematics model optimization method according to the present application.
Fig. 2 is a schematic flow chart of another embodiment of the robot kinematics model optimization method according to the present application.
Fig. 3 is a schematic flowchart of an embodiment of the method for acquiring the actual pose of the end part of the robot according to the present application.
Fig. 4 is a schematic structural diagram of an embodiment of the robot kinematics model optimization system according to the present application.
Fig. 5 is a schematic structural diagram of an embodiment of the robot end part actual pose acquisition system according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Use of ordinal terms such as "first," "second," etc., in the claims and the specification to modify an element does not by itself connote any priority or order of one element over another, but are used merely as labels to distinguish two elements having the same name.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a method for optimizing a kinematic model of a robot according to an embodiment of the present disclosure. The method comprises the following steps:
S101: An image of the end part of the robot is acquired by an image pickup apparatus.
Robots, and industrial robots in particular, perform their various operations, such as machining, clamping and feeding, mainly through end parts, which include tools, grippers, feeding devices, and the like. The quality of robot motion control is therefore mainly reflected in the positioning accuracy of the end parts. The robot kinematic model is used to calculate the motion of each joint and part of the robot, including the motion of the end part; the more accurate the kinematic model, the more accurate the calculated motion of the end part. To allow subsequent correction of the kinematic model, an image of the end part of the robot is first acquired by an image pickup apparatus in step S101.
Optionally, there may be a plurality of image pickup apparatuses, each acquiring an image of the end part of the robot.
Optionally, the image pickup apparatus may be an infrared sensor, a CCD camera, or a CMOS camera.
S102: feature points of the tip part are extracted based on the image.
The feature points are points that can be used to describe the position and attitude of the tip part. It should be possible to distinguish the feature points from other parts of the image in its surroundings, for example, the feature points may have different colors or different materials. Alternatively, the characteristic point may be a point that represents a geometric characteristic of the outer shape of the tip member, such as an apex of the conical member, a point on the circumference of the circular member, a center of a circle, or the like. In step S102, the feature points are identified and extracted from the image of the tip part by an image recognition technique, and the positions thereof in the image are acquired. It is to be understood that the feature point may be plural in order to describe the position and posture of the tip part.
S103: Three-dimensional reconstruction is performed on the feature points of the end part to obtain their coordinates in the image coordinate system, and their coordinates in the world coordinate system are obtained according to the conversion relationship between the image coordinate system and the world coordinate system.
After the positions of the feature points in the image of the end part, that is, their coordinates in the image coordinate system, are obtained, the feature points can be three-dimensionally reconstructed and their coordinates in the world coordinate system obtained according to the conversion relationship between the image coordinate system and the world coordinate system. For example, the coordinates of a feature point of the end part in the world coordinate system may be calculated from its coordinates in the image coordinate system together with the conversion relationships among the image coordinate system, the image pickup apparatus coordinate system, and the world coordinate system.
S104: The actual pose of the end part of the robot is obtained from the coordinates of the feature points in the world coordinate system.
The actual pose of the end part comprises its actual position and actual attitude, and the position coordinates and direction vector of the end part in the world coordinate system can be determined from the calculated coordinates of the feature points. For example, one of the feature points may be taken as an origin whose coordinates represent the position of the end part, while the attitude of the end part is described by the positional relationship of the other feature points to that origin.
Optionally, in some embodiments, the kinematic model of the robot is calculated in the robot base coordinate system. In that case, the coordinates of the feature points in the world coordinate system may be further converted into coordinates in the robot base coordinate system through the conversion relationship between the two coordinate systems, so as to determine the actual pose of the end part in the robot base coordinate system, that is, its position coordinates and direction vector in that coordinate system. Similarly, the obtained actual pose of the end part may be described in other coordinate systems if needed, which is not limited herein.
S105: The kinematic model of the robot is optimized according to the actual pose of the end part.
As mentioned above, the robot kinematic model is used to calculate the motion of each part of the robot, and its specific parameters are related to the dimensions, assembly structure, and motion mode of each part. From the kinematic model, the nominal pose of the end part can be calculated; due to the presence of errors, there will be a difference between the nominal pose and the actual pose. Therefore, after the actual pose of the end part is obtained, the actual motion parameters of the robot (position, orientation, velocity, angular velocity, acceleration, etc.) can be calculated with a suitable mathematical algorithm and iterated into the kinematic model, so that the parameters of the kinematic model are compensated and the model is optimized.
In this embodiment, by acquiring an image of the end part of the robot with the image pickup apparatus and three-dimensionally reconstructing the feature points in the image, the actual pose of the end part can be obtained conveniently, and the kinematic model of the robot is optimized with the actual pose, correcting the kinematic parameters in the model. The kinematic model can thus be optimized conveniently, improving the positioning precision of the robot.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a method for optimizing a kinematic model of a robot according to another embodiment of the present application. The method comprises the following steps:
S201: A kinematic model of the robot is constructed according to the structure of the robot.
The method of constructing the robot kinematic model can be chosen as needed and should not be taken as a limitation of the present application. For example, in some embodiments, the kinematic model of the robot may be represented using D-H parameters, with the transformation matrix $A_i$ between adjacent joints:

$$A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where $a_i$ is the link length, $\alpha_i$ is the link twist, $d_i$ is the link offset, $\theta_i$ is the joint angle, and the index $i$ denotes the coordinate system corresponding to the $i$-th joint of the robot. With 0 denoting the base coordinate system of the robot, 1 to $n-1$ the coordinate systems of the robot's joints, and $n$ the coordinate system of the end part, the homogeneous transformation matrix $T_n^0$ from the base coordinate system to the end-part coordinate system can be expressed as:

$$T_n^0 = A_1 A_2 \cdots A_n$$

With the known robot structure, these parameters can be obtained and the kinematic model of the robot thus constructed.
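As an illustration of the D-H chaining described above (a minimal sketch, not the patent's implementation; the two-link parameters below are hypothetical):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard D-H transform A_i between joint frames i-1 and i."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(m, n):
    """4x4 matrix product."""
    return [[sum(m[i][k] * n[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params):
    """Chain A_1 ... A_n to obtain the base-to-end-part transform T."""
    t = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for theta, d, a, alpha in dh_params:
        t = mat_mul(t, dh_matrix(theta, d, a, alpha))
    return t

# Hypothetical two-link planar arm, both joints rotated 90 degrees;
# the nominal end-part position is the last column of T.
params = [(math.pi / 2, 0.0, 1.0, 0.0), (math.pi / 2, 0.0, 1.0, 0.0)]
T = forward_kinematics(params)
print([round(T[i][3], 6) for i in range(3)])
```

With these hypothetical parameters the chained transform places the end part at (-1, 1, 0), the expected tip of a two-link arm folded through two right angles.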
S202: the nominal pose of the end part of the robot is calculated.
At any moment, the nominal pose of the end part of the robot, including its nominal position and nominal attitude, can be calculated from the kinematic model of the robot.
S203: A three-dimensional vision system is constructed using the image pickup apparatuses, and the conversion relationship between the image coordinate system of the three-dimensional vision system and the world coordinate system is obtained.
Two image pickup apparatuses can be used to construct a three-dimensional vision system, i.e., a binocular vision positioning system. For a feature point on an object, the two apparatuses, fixed at different positions, acquire images of the object, and the coordinates of the feature point on their (two-dimensional) image planes, that is, its coordinates in the image coordinate system of the three-dimensional vision system, are acquired respectively. As long as the precise relative positions of the two apparatuses are known, the coordinates of the feature point in an image pickup apparatus coordinate system can be obtained geometrically. The origin of the image pickup apparatus coordinate system may be a point at any known location on one of the apparatuses (e.g., its base or image sensor), or a point at any known location outside them. Unlike the image coordinate system of the three-dimensional vision system, the image pickup apparatus coordinate system is three-dimensional. Further, using the conversion relationship between the image pickup apparatus coordinate system and the world coordinate system, the conversion relationship between the image coordinate system of the three-dimensional vision system and the world coordinate system can be obtained.
For ease of understanding, the above process is illustrated with an example. Two cameras with the same focal length are placed side by side, and a left camera coordinate system $O_1x_1y_1$ and a right camera coordinate system $O_2x_2y_2$ are established with the geometric center of each lens as the origin, the x-axes and y-axes of the two camera coordinate systems being parallel to each other. Each camera coordinate plane is perpendicular to the optical axis direction, and the x-axis lies along the reference line. The reference line is the segment connecting the geometric centers of the two lenses; it is made perpendicular to the optical axes, and its length $b$ is known from the camera positions. An image pickup apparatus coordinate system $O_3x_3y_3z_3$ is then established with the midpoint of the reference line as the origin (it may be regarded as the camera coordinate system of either camera), its x-axis and y-axis parallel to those of the two camera coordinate systems.

Any point $A(x_3, y_3, z_3)$ in the image pickup apparatus coordinate system is imaged by the left and right cameras at the plane coordinates $A_1(x_1, y_1)$ and $A_2(x_2, y_2)$ respectively, the two imaging points lying in the same imaging plane:

$$x_1 = \frac{f\,(x_3 + b/2)}{z_3}, \qquad x_2 = \frac{f\,(x_3 - b/2)}{z_3}, \qquad y_1 = y_2 = \frac{f\,y_3}{z_3}$$

where the focal length $f$ is the distance from the geometric center of a lens to the imaging plane. From this system of equations, the three unknowns $x_3$, $y_3$ and $z_3$ can be obtained as:

$$x_3 = \frac{b\,(x_1 + x_2)}{2\,(x_1 - x_2)}, \qquad y_3 = \frac{b\,y_1}{x_1 - x_2}, \qquad z_3 = \frac{f\,b}{x_1 - x_2}$$

This completes the conversion from the plane coordinates of the three-dimensional vision system into the image pickup apparatus coordinate system. The coordinates $(x_4, y_4, z_4)$ of point $A$ in the world coordinate system can then be obtained from the conversion relationship between the image pickup apparatus coordinate system and the world coordinate system, for example

$$\begin{bmatrix} x_4 \\ y_4 \\ z_4 \end{bmatrix} = \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix} \begin{bmatrix} x_3 \\ y_3 \\ z_3 \end{bmatrix} + \begin{bmatrix} T_X \\ T_Y \\ T_Z \end{bmatrix}$$

where $R_{11}$ to $R_{33}$ are the rotation parameters of the coordinate transformation, determined by the angular relationship between the axes of the image pickup apparatus coordinate system and the world coordinate system, and $T_X$, $T_Y$, $T_Z$ are the translation parameters, determined by the relative position of the origin of the image pickup apparatus coordinate system and the origin of the world coordinate system.
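The parallel-axis triangulation and world-coordinate conversion above can be sketched as follows (a minimal illustration, not the patent's code; the focal length, baseline, and world-frame rotation/translation are hypothetical values):

```python
def triangulate(x1, y1, x2, y2, f, b):
    """Recover camera-frame coordinates (x3, y3, z3) of a feature point
    from its two image-plane projections A1 = (x1, y1), A2 = (x2, y2)."""
    disparity = x1 - x2          # equals f * b / z3 for parallel-axis cameras
    z3 = f * b / disparity
    x3 = b * (x1 + x2) / (2.0 * disparity)
    y3 = b * y1 / disparity      # y1 == y2 for row-aligned cameras
    return x3, y3, z3

def camera_to_world(p, R, T):
    """Apply the rotation R (3x3) and translation T of the camera frame
    to obtain world coordinates (x4, y4, z4)."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + T[i]
                 for i in range(3))

# Hypothetical setup: f = 1, baseline b = 0.2, and a world frame that is
# the camera frame shifted by (0, 0, 1) with no rotation.
f, b = 1.0, 0.2
x3, y3, z3 = triangulate(0.15, 0.1, 0.05, 0.1, f, b)
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [0.0, 0.0, 1.0]
print(camera_to_world((x3, y3, z3), R, T))
```

For these hypothetical image coordinates the reconstructed camera-frame point is (0.2, 0.2, 2.0), which the identity rotation and translation map to (0.2, 0.2, 3.0) in the world frame.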
Therefore, by constructing the three-dimensional vision system using two imaging apparatuses, it is possible to acquire coordinates of the feature point of the tip part of the robot in the world coordinate system in the subsequent step. It is understood that step S203 may be performed before step S201 and step S202, after step S201 and step S202, or simultaneously with step S201 and step S202.
S204: an image of the tip part is acquired with an image pickup apparatus.
S205: and acquiring the coordinates of the characteristic points in the image coordinate system.
At least three characteristic points are marked on the end part in advance and are not located on the same straight line.
It will be appreciated that the feature points may be used to describe the position and attitude of the end member, and therefore, at least three feature points that are not on the same line may be selected. Alternatively, to facilitate the description of the position and attitude of the end member, the feature point may be selected as a feature point related to the geometric shape of the end member, for example, the center of a circle of a disk. The method of marking the feature points may be determined as needed, for example, the feature points are marked by using a specific shape and/or a specific color, and the marking method of each feature point may be the same or different.
In some embodiments, the at least three feature points include a first feature point located at an origin of a component coordinate system of the end component, a second feature point located on a first coordinate axis of the component coordinate system, and a third feature point located on a second coordinate axis of the component coordinate system. The origin, the first coordinate axis and the second coordinate axis of the component coordinate system of the end component may be predefined, and for example, the origin may be any point on the end component or a point outside the end component. The first coordinate axis and the second coordinate axis pass through the origin and are perpendicular to each other, and the specific directions of the first coordinate axis and the second coordinate axis can be set as required.
In some embodiments, the end member is a flange of the robot for connection to other tools. The component coordinate system is a flange coordinate system of the flange plate of the robot, and the origin, the first coordinate axis and the second coordinate axis of the component coordinate system are located on the end face of the flange plate. For example, the flange center is used as an origin, and the first axis extends from the origin in an arbitrary direction of the flange end surface, and the second axis extends from the origin in a direction perpendicular to the first axis.
In steps S204 and S205, by acquiring the image of the end part with the image pickup apparatus, the images of the respective feature points can be acquired at the same time, the respective feature points therein can be extracted by recognizing the images, and the coordinates of the feature points in the image coordinate system of the three-dimensional vision system can be acquired separately with the three-dimensional vision system set up in step S203.
S206: and calculating the coordinates of the characteristic points in the world coordinate system according to the conversion relation between the image coordinate system and the world coordinate system.
Further, the coordinates of the feature point in the world coordinate system can be calculated from the conversion relationship between the three-dimensional visual system image coordinate system and the world coordinate system obtained in step S203.
S207: the actual position of the end part is determined from the coordinates of the first characteristic point in the world coordinate system.
When three feature points are employed, the first feature point may be any one of them, and the other two are then referred to as the second and third feature points. The actual position of the end part may be represented directly by the coordinates of the first feature point in the world coordinate system; alternatively, from those coordinates and the relationship between the first feature point and the origin of the component coordinate system of the end part, the coordinates of that origin in the world coordinate system may be determined and used to represent the actual position of the end part.
S208: The actual attitude of the end part is determined from the relative positions of the first and second feature points and of the first and third feature points.
Since the first, second, and third feature points are not on the same straight line, they define a plane of the end part of the robot, and the actual attitude of that plane can be determined from their positional relationship.
For example, when the first, second, and third feature points are denoted O, A, and B respectively, and O is taken as the origin of the component coordinate system of the end part, the coordinates $(x_o, y_o, z_o)$ of O in the world coordinate system indicate the position of the end part. The vectors OA and OB determine a plane, and a normal vector of this plane in the world coordinate system, perpendicular to both OA and OB, can be calculated to represent its actual attitude. The actual position and actual attitude of the end part in the world coordinate system are thus obtained. It will be appreciated that the actual attitude may also be represented with other parameters, such as yaw, pitch, and roll angles. In addition, if necessary, the actual position and attitude of the end part in the robot base coordinate system can be obtained from the conversion relationship between the world coordinate system and the base coordinate system of the robot.
In some embodiments, the coordinates of the origin of the component coordinate system in the base coordinate system of the robot may be obtained from the coordinates of the first feature point in the base coordinate system. For example, since the first feature point O is taken as the origin of the component coordinate system of the end part, the coordinates of O in the base coordinate system directly give the coordinates of that origin. In addition, the coordinate vector of the first coordinate axis of the component coordinate system in the base coordinate system can be obtained from the coordinate vector, in the base coordinate system, of the line connecting the first and second feature points; likewise, the coordinate vector of the second coordinate axis can be obtained from the line connecting the first and third feature points. For example, if the second and third feature points A and B are points on the first and second coordinate axes of the component coordinate system respectively, the coordinate vector of the first coordinate axis in the base coordinate system can be obtained from the coordinate vector of OA in the base coordinate system of the robot, and that of the second coordinate axis from the coordinate vector of OB.

Therefore, the actual pose of the component coordinate system can be obtained from the coordinate vectors of its first and second coordinate axes in the base coordinate system.
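The pose construction of steps S207 and S208 can be sketched as follows (a minimal illustration with hypothetical feature-point coordinates; the cross product OA × OB supplies the plane normal used to describe the attitude):

```python
import math

def sub(p, q):
    return [p[i] - q[i] for i in range(3)]

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def end_part_pose(o, a, b):
    """Position = first feature point O; attitude = unit axes built from
    OA (first coordinate axis) and the plane normal OA x OB."""
    x_axis = normalize(sub(a, o))
    normal = normalize(cross(sub(a, o), sub(b, o)))
    y_axis = cross(normal, x_axis)   # completes a right-handed frame
    return o, [x_axis, y_axis, normal]

# Hypothetical world coordinates of the three marked feature points.
o, a, b = [1.0, 2.0, 0.5], [2.0, 2.0, 0.5], [1.0, 3.0, 0.5]
position, axes = end_part_pose(o, a, b)
print(position, axes)
```

With these hypothetical points the end part sits at (1, 2, 0.5) with its plane normal along the world z-axis; in the general case the three axes give the rotation part of the pose and O gives the translation part.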
S209: and optimizing the kinematic model of the robot according to the actual pose of the tail end part.
After the actual pose of the end part is obtained, the actual motion parameters (position, attitude, velocity, angular velocity, acceleration, etc.) of the end part of the robot can be calculated by using a relevant mathematical algorithm, such as a particle swarm optimization method, a genetic algorithm, etc., and iterated into the kinematic model of the robot, so that the parameters of the kinematic model of the robot are compensated, the kinematic model of the robot is optimized, and the nominal position and the nominal pose of the end part of the robot are updated. Therefore, the optimized kinematic model of the robot is closer to the actual situation, and the control precision of the robot is improved.
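As an illustration of the compensation idea only (a minimal sketch that substitutes a brute-force parameter scan on a toy one-joint model for the particle-swarm or genetic algorithms mentioned above; all values are hypothetical):

```python
import math

def predicted_tip(a, theta):
    """Toy one-joint 'model': tip of a single link of length a at angle theta."""
    return a * math.cos(theta), a * math.sin(theta)

def position_error(a, measurements):
    """Sum of squared distances between model predictions and measured poses."""
    err = 0.0
    for theta, (mx, my) in measurements:
        px, py = predicted_tip(a, theta)
        err += (px - mx) ** 2 + (py - my) ** 2
    return err

# Hypothetical measurements from the vision system: the real link is
# 1.02 long while the nominal model assumes 1.0.
true_a = 1.02
measurements = [(t, predicted_tip(true_a, t)) for t in (0.1, 0.7, 1.3)]

# Scan candidate link lengths and keep the one minimizing the error;
# a real system would use least squares, particle swarm optimization,
# or a genetic algorithm over all kinematic parameters instead.
candidates = [1.0 + 0.001 * k for k in range(50)]
best = min(candidates, key=lambda a: position_error(a, measurements))
print(round(best, 3))
```

The scan recovers the link length 1.02 that generated the measurements; substituting the corrected parameter back into the model is the compensation step described above.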
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating an embodiment of a method for acquiring an actual pose of a robot end component according to the present application. The method comprises the following steps:
S301: An image of the end part of the robot is acquired by an image pickup apparatus.
S302: feature points of the tip part are extracted based on the image.
S303: and performing three-dimensional reconstruction on the characteristic points to obtain the coordinates of the characteristic points in a world coordinate system.
S304: and obtaining the actual pose of the tail end part of the robot according to the coordinates of the characteristic points in a world coordinate system.
Steps S301 to S304 are similar to steps S101 to S104 in the foregoing embodiment and are not repeated here. By using the image pickup apparatus to obtain an image of the end part of the robot and performing three-dimensional reconstruction on the feature points in the image, this embodiment conveniently obtains the actual pose of the end part. The actual pose can be applied to many aspects of robot control and robot motion calculation, including but not limited to optimization of the robot kinematic model.
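The three-dimensional reconstruction of step S303 can be sketched with classic linear (DLT) triangulation from a calibrated stereo pair. This is an assumption-laden illustration: the 3x4 projection matrices of the two cameras are taken as known from prior calibration, and a practical system would typically rely on a vision library rather than this hand-rolled solver.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover the world coordinates of a feature point from its pixel
    coordinates (uv1, uv2) in two calibrated views.  P1 and P2 are the
    3x4 projection matrices of the two image pickup apparatuses."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space direction = homogeneous solution
    return X[:3] / X[3]             # de-homogenize to world coordinates
```

Triangulating the three marked feature points this way yields their world coordinates, which step S304 then converts into the actual pose of the end part.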
Referring to fig. 4, fig. 4 is a schematic structural diagram of a robot kinematics model optimization system 400 according to an embodiment of the present application. The system 400 includes a communication bus 401, a processor 402, a memory 403, and an image pickup apparatus 404. The processor 402, the memory 403, and the image pickup apparatus 404 are coupled by the communication bus 401.
The memory 403 stores program data that can be loaded by the processor 402 and executed to perform the robot kinematics model optimization method of any of the embodiments described above. It is understood that in other embodiments, the memory 403 and/or the image pickup apparatus 404 may be disposed in a physical device different from that of the processor 402, in which case the robot kinematics model optimization system 400 performs the method of any of the above embodiments in combination with a network and external devices.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a system 500 for acquiring the actual pose of a robot end part according to the present application. The system 500 includes a communication bus 501, a processor 502, a memory 503, and an image pickup apparatus 504. The processor 502, the memory 503, and the image pickup apparatus 504 are coupled by the communication bus 501.
The memory 503 stores program data that can be loaded by the processor 502 and executed to perform the robot end part actual pose acquisition method of any of the embodiments described above. It is understood that in other embodiments, the memory 503 and/or the image pickup apparatus 504 may be disposed in a physical device different from that of the processor 502, in which case the actual pose acquisition system 500 performs the method of any of the above embodiments in combination with a network and external devices.
The functions described in the above embodiments, if implemented in software and sold or used as a separate product, may be stored in a device having a storage function; that is, the present application also provides a storage device storing a program. The program data in the storage device, which includes but is not limited to a USB disk, an optical disk, a server, or a hard disk, can be executed to implement the robot kinematics model optimization method or the robot actual pose acquisition method of the above embodiments.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (15)

1. A method for optimizing a kinematic model of a robot, comprising:
marking at least three characteristic points on the end part, wherein the at least three characteristic points are not positioned on the same straight line;
acquiring an image of a tip part of a robot by an image pickup apparatus;
extracting feature points of the tip part based on the image;
carrying out three-dimensional reconstruction on the feature points to obtain coordinates of the feature points in an image coordinate system, and obtaining the coordinates of the feature points in a world coordinate system according to the conversion relation between the image coordinate system and the world coordinate system;
obtaining the actual pose of the tail end part of the robot according to the coordinates of the feature points in a world coordinate system;
wherein the deriving an actual pose of the tip part of the robot from coordinates of the feature points in a world coordinate system comprises:
determining an actual position of the tip part from coordinates of the first feature point in the world coordinate system;
determining the actual posture of the end part according to the relative position relationship between the first characteristic point and the second characteristic point and the relative position relationship between the first characteristic point and the third characteristic point;
optimizing a kinematic model of the robot according to the actual pose of the tip component.
2. The method of claim 1, further comprising, prior to reconstructing the feature points in three dimensions:
and constructing a three-dimensional visual system by using at least two camera devices to obtain a conversion relation between an image coordinate system and a world coordinate system of the three-dimensional visual system.
3. The method of claim 2, wherein obtaining the transformation relationship between the image coordinate system and the world coordinate system of the three-dimensional vision system comprises:
obtaining a conversion relation between an image coordinate system of the three-dimensional vision system and a camera equipment coordinate system of one of the two camera equipment according to the relative positions of the two camera equipment;
and obtaining the conversion relation between the image coordinate system of the three-dimensional vision system and the world coordinate system according to the conversion relation between the camera equipment coordinate system of one of the two camera equipment and the world coordinate system.
4. The method of claim 1, wherein: the at least three feature points include a first feature point located at an origin of a component coordinate system of the tip component, a second feature point located on a first coordinate axis of the component coordinate system, and a third feature point located on a second coordinate axis of the component coordinate system.
5. The method of claim 4, wherein: the obtaining the actual pose of the end part of the robot from the coordinates of the feature points in the world coordinate system includes:
transforming the coordinates of the characteristic points under the world coordinate system into the coordinates under the base coordinate system of the robot according to the transformation relation between the base coordinate system of the robot and the world coordinate system;
and calculating the actual pose of the tail end part according to the coordinates of the characteristic points under the base coordinate system of the robot.
6. The method of claim 5, wherein: the calculating the actual pose of the tip part from the coordinates of the feature points under the base coordinate system of the robot includes:
obtaining the coordinate of the origin of the component coordinate system in the base coordinate system according to the coordinate of the first feature point in the base coordinate system;
obtaining a coordinate vector of a first coordinate axis of the component coordinate system in the base coordinate system according to the coordinate vector of the connecting line of the first characteristic point and the second characteristic point in the base coordinate system;
obtaining a coordinate vector of a second coordinate axis of the component coordinate system in the base coordinate system according to the coordinate vector of the connecting line of the first feature point and the third feature point in the base coordinate system;
and obtaining the actual pose of the component coordinate system according to the coordinate vectors of the first coordinate axis and the second coordinate axis in the base coordinate system.
7. A robot tip part actual pose acquisition method is characterized by comprising the following steps:
marking at least three characteristic points on the end part, wherein the at least three characteristic points are not positioned on the same straight line;
acquiring an image of a tip part of a robot by an image pickup apparatus;
extracting feature points of the tip part based on the image;
performing three-dimensional reconstruction on the feature points to obtain coordinates of the feature points in a world coordinate system;
obtaining an actual pose of the tail end part of the robot according to the coordinates of the feature points in a world coordinate system;
the obtaining the actual pose of the end part of the robot from the coordinates of the feature points in the world coordinate system includes:
determining the actual position of the end part according to the coordinates of the first characteristic point in the world coordinate system;
and determining the actual posture of the end part according to the relative position relation between the first characteristic point and the second characteristic point and the relative position relation between the first characteristic point and the third characteristic point.
8. A robot kinematics model optimization system comprising a processor, a memory and a camera device, the memory storing program instructions, the processor being loadable with the program instructions and executing a robot kinematics model optimization method comprising:
marking at least three characteristic points on the end part, wherein the at least three characteristic points are not positioned on the same straight line;
acquiring an image of a tip part of a robot by the image pickup apparatus;
extracting feature points of the tip part based on the image;
carrying out three-dimensional reconstruction on the feature points to obtain coordinates of the feature points in an image coordinate system, and obtaining the coordinates of the feature points in a world coordinate system according to the conversion relation between the image coordinate system and the world coordinate system;
obtaining an actual pose of the tail end part of the robot according to the coordinates of the feature points in a world coordinate system; the obtaining the actual pose of the end part of the robot from the coordinates of the feature points in the world coordinate system includes:
determining the actual position of the end part according to the coordinates of the first characteristic point in the world coordinate system;
determining the actual posture of the tail end part according to the relative position relationship between the first characteristic point and the second characteristic point and the relative position relationship between the first characteristic point and the third characteristic point;
optimizing a kinematic model of the robot according to the actual pose of the tip component.
9. The system of claim 8, further comprising, prior to reconstructing the feature points in three dimensions:
and constructing a three-dimensional visual system by using at least two camera devices to obtain a conversion relation between an image coordinate system and a world coordinate system of the three-dimensional visual system.
10. The system of claim 9, wherein the obtaining of the conversion relation between the image coordinate system and the world coordinate system of the three-dimensional vision system comprises:
obtaining a conversion relation between an image coordinate system of the three-dimensional vision system and a camera equipment coordinate system of one of the two camera equipment according to the relative positions of the two camera equipment;
and obtaining the conversion relation between the image coordinate system of the three-dimensional vision system and the world coordinate system according to the conversion relation between the camera equipment coordinate system of one of the two camera equipment and the world coordinate system.
11. The system of claim 8, wherein: the at least three feature points include a first feature point located at an origin of a component coordinate system of the tip component, a second feature point located on a first coordinate axis of the component coordinate system, and a third feature point located on a second coordinate axis of the component coordinate system.
12. The system of claim 11, wherein: the obtaining of the actual pose of the end part of the robot from the coordinates of the feature points in the world coordinate system includes:
transforming the coordinates of the characteristic points under the world coordinate system into the coordinates under the base coordinate system of the robot according to the transformation relation between the base coordinate system of the robot and the world coordinate system;
and calculating the actual pose of the tail end part according to the coordinates of the characteristic points under the base coordinate system of the robot.
13. The system of claim 12, wherein: the calculating of the actual pose of the tip part from the coordinates of the feature points under the base coordinate system of the robot includes:
obtaining the coordinate of the origin of the component coordinate system in the base coordinate system according to the coordinate of the first feature point in the base coordinate system;
obtaining a coordinate vector of a first coordinate axis of the component coordinate system in the base coordinate system according to a coordinate vector of a connecting line of the first characteristic point and the second characteristic point in the base coordinate system;
obtaining a coordinate vector of a second coordinate axis of the component coordinate system in the base coordinate system according to the coordinate vector of the connecting line of the first feature point and the third feature point in the base coordinate system;
and obtaining the actual pose of the component coordinate system according to the coordinate vectors of the first coordinate axis and the second coordinate axis in the base coordinate system.
14. A robot end-part actual pose acquisition system comprising a processor, a memory, and an image capture device, the memory storing program instructions, the processor being loadable with the program instructions and executing a robot end-part actual pose acquisition method, the method comprising:
marking at least three characteristic points on the end part, wherein the at least three characteristic points are not positioned on the same straight line;
acquiring an image of a tip part of a robot by an image pickup apparatus;
extracting feature points of the tip part based on the image;
performing three-dimensional reconstruction on the feature points to obtain coordinates of the feature points in a world coordinate system;
obtaining the actual pose of the tail end part of the robot according to the coordinates of the feature points in a world coordinate system;
the obtaining of the actual pose of the end part of the robot from the coordinates of the feature points in the world coordinate system includes:
determining the actual position of the end part according to the coordinates of the first characteristic point in the world coordinate system;
determining the actual posture of the tail end part according to the relative position relationship between the first characteristic point and the second characteristic point and the relative position relationship between the first characteristic point and the third characteristic point.
15. An apparatus having a storage function, characterized in that: the apparatus is used for storing program instructions which can be loaded and executed to perform the method according to any one of claims 1-7.
CN201880088587.2A 2018-07-13 2018-07-13 Robot kinematics model optimization method and system and storage device Active CN111699445B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/095675 WO2020010625A1 (en) 2018-07-13 2018-07-13 Method and system for optimizing kinematic model of robot, and storage device.

Publications (2)

Publication Number Publication Date
CN111699445A CN111699445A (en) 2020-09-22
CN111699445B true CN111699445B (en) 2022-10-11

Family

ID=69143185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880088587.2A Active CN111699445B (en) 2018-07-13 2018-07-13 Robot kinematics model optimization method and system and storage device

Country Status (2)

Country Link
CN (1) CN111699445B (en)
WO (1) WO2020010625A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835773A (en) * 2021-08-10 2021-12-24 深兰科技(上海)有限公司 Parameter configuration method and device of motion module, electronic equipment and storage medium
CN115060229A (en) * 2021-09-30 2022-09-16 西安荣耀终端有限公司 Method and device for measuring a moving object

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001038662A (en) * 1999-08-04 2001-02-13 Honda Motor Co Ltd Working robot calibrating method
CN102818524A (en) * 2012-07-31 2012-12-12 华南理工大学 On-line robot parameter calibration method based on visual measurement
CN103322953B (en) * 2013-05-22 2015-11-04 北京配天技术有限公司 The scaling method of workpiece coordinate system, device and work pieces process disposal route, device
CN104858870A (en) * 2015-05-15 2015-08-26 江南大学 Industrial robot measurement method based on tail end numbered tool
CN106406277B (en) * 2016-09-23 2019-01-25 贵州珞石三盛科技有限公司 Robot kinematics' parameter error Optimization Compensation method and device
CN108242064B (en) * 2016-12-27 2020-06-02 合肥美亚光电技术股份有限公司 Three-dimensional reconstruction method and system based on area array structured light system
CN107121967A (en) * 2017-05-25 2017-09-01 西安知象光电科技有限公司 A kind of laser is in machine centering and inter process measurement apparatus
CN107274481A (en) * 2017-06-07 2017-10-20 苏州大学 A kind of method for reconstructing three-dimensional model based on multistation website point cloud
CN108038902B (en) * 2017-12-07 2021-08-27 合肥工业大学 High-precision three-dimensional reconstruction method and system for depth camera

Also Published As

Publication number Publication date
CN111699445A (en) 2020-09-22
WO2020010625A1 (en) 2020-01-16

Similar Documents

Publication Publication Date Title
JP7237483B2 (en) Robot system control method, control program, recording medium, control device, robot system, article manufacturing method
EP1555508B1 (en) Measuring system
JP6222898B2 (en) Three-dimensional measuring device and robot device
CN109029257A (en) Based on stereoscopic vision and the large-scale workpiece pose measurement system of structure light vision, method
JP6324025B2 (en) Information processing apparatus and information processing method
CN113379849B (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN111801198A (en) Hand-eye calibration method, system and computer storage medium
JP2010172986A (en) Robot vision system and automatic calibration method
CN109940626B (en) Control method of eyebrow drawing robot system based on robot vision
CN107300382B (en) Monocular vision positioning method for underwater robot
CN110722558B (en) Origin correction method and device for robot, controller and storage medium
JP4132068B2 (en) Image processing apparatus, three-dimensional measuring apparatus, and program for image processing apparatus
CN111360821A (en) Picking control method, device and equipment and computer scale storage medium
CN111699445B (en) Robot kinematics model optimization method and system and storage device
JP7427370B2 (en) Imaging device, image processing device, image processing method, calibration method for imaging device, robot device, method for manufacturing articles using robot device, control program, and recording medium
CN111145267B (en) 360-degree panoramic view multi-camera calibration method based on IMU assistance
CN117340879A (en) Industrial machine ginseng number identification method and system based on graph optimization model
JP2016017913A (en) Posture information preparation system, posture information preparation method, and posture information preparation program
CN113524167A (en) Method for establishing workpiece coordinate system when robot processes workpiece and pose correction method
JP2778430B2 (en) Three-dimensional position and posture recognition method based on vision and three-dimensional position and posture recognition device based on vision
CN113405532B (en) Forward intersection measuring method and system based on structural parameters of vision system
CN115619877A (en) Method for calibrating position relation between monocular laser sensor and two-axis machine tool system
CN113255662A (en) Positioning correction method, system, equipment and storage medium based on visual imaging
CN112060083B (en) Binocular stereoscopic vision system for mechanical arm and measuring method thereof
CN112184819A (en) Robot guiding method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant