Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The robot hand-eye calibration method provided by the application can be applied to the application environment shown in fig. 1, and fig. 1 is an application environment diagram of the robot hand-eye calibration method in one embodiment. The end of the robot 10 comprises a movable tool arm 11; the camera 20 and the calibration jig 30 can be mounted on the end of the robot 10 by means of the tool arm 11; a stationary feature point 40 is placed below the end of the robot 10; and the tool arm 11 is mounted on the end of the robot 10 by means of a flange 50, so that the placing direction of the calibration jig 30 is parallel to the normal direction of the flange 50.
In an embodiment, as shown in fig. 2, fig. 2 is a flowchart of a robot hand-eye calibration method in an embodiment, which is described by applying the method to the application environment in fig. 1, a camera and a standard jig are installed at a tail end of a robot, and a feature point is set on a fixed calibration object, the method includes the following steps:
step S210: the method comprises the steps of obtaining a first machine coordinate, a second machine coordinate and a calibration pixel coordinate, wherein the first machine coordinate is a movement control coordinate of the robot when the tail end of the calibration jig aligns to a feature point along a standard direction, the standard direction is a normal direction of a flange plate installed on the robot, the second machine coordinate is a movement control coordinate of the robot when the feature point is located at a specified position in a camera visual field, and the calibration pixel coordinate is a position coordinate of the feature point in the camera visual field when the feature point moves to the specified position in the camera visual field along with the robot.
The calibration jig is indirectly installed at the tail end of the robot through the flange plate, so that the placing direction of the calibration jig is parallel to the normal direction of the flange plate. Aligning the tail end of the calibration jig with the feature point along the standard direction therefore ensures that the calibration jig is aligned with the feature point along its own placing direction.
The movement control coordinates of the robot may be used to record the spatial position of the tool arm at the end of the robot and to control the movement of the tool arm to a specified spatial position. Therefore, the first machine coordinate can be acquired and stored when the tip of the calibration jig is aligned with the feature point in the standard direction, and the second machine coordinate can likewise be collected and stored as the movement control coordinate of the robot when the feature point is located at a specified position in the camera field of view.
The camera view in the camera at the robot end may contain the feature points and the focal length of the camera may be such that the feature points can be clearly identified. The calibration pixel coordinates may be used to identify and store position coordinates of the feature point in the camera field of view acquired by the camera when the feature point moves to a specified position in the camera field of view along with the robot.
The designated positions are plural, and may include the upper left, upper, upper right, left, middle, right, lower left, lower and lower right of the camera field of view. For example, the robot may be moved so that the feature point appears at each of these nine positions in turn, and the second machine coordinates and calibration pixel coordinates at the 9 designated positions may be acquired; alternatively, the second machine coordinates and calibration pixel coordinates at any number of designated positions may be acquired.
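The acquisition loop over the designated positions can be sketched as follows; the position names, the `move_robot_until_feature_at`, `read_machine_coord` and `read_feature_pixel` helpers and the data layout are illustrative assumptions, not part of the application:

```python
# Hypothetical sketch: collect the second machine coordinates and the
# calibration pixel coordinates at the nine designated positions.
# move_robot_until_feature_at() stands in for the jog-and-check routine
# that brings the feature point into the named region; it is not a real API.

DESIGNATED_POSITIONS = [
    "upper left", "upper", "upper right",
    "left", "middle", "right",
    "lower left", "lower", "lower right",
]

def collect_calibration_data(move_robot_until_feature_at, read_machine_coord,
                             read_feature_pixel):
    """Return parallel lists of second machine coordinates (vx, vy, vr)
    and calibration pixel coordinates (u, w), one pair per position."""
    second_machine_coords = []
    calibration_pixel_coords = []
    for name in DESIGNATED_POSITIONS:
        move_robot_until_feature_at(name)                      # drive the feature point there
        second_machine_coords.append(read_machine_coord())     # (vx_i, vy_i, vr_i)
        calibration_pixel_coords.append(read_feature_pixel())  # (u_i, w_i)
    return second_machine_coords, calibration_pixel_coords
```

The pairing matters: each pixel coordinate is matched with the machine coordinate read at the same moment.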
Step S220: and acquiring a perspective transformation matrix according to the first machine coordinate, the second machine coordinate and the calibration pixel coordinate.
From another point of view, it can be understood that when the position of the robot is at the first machine coordinates, in order to move the feature point to a specified position in the camera field of view, it is actually necessary to control the robot and move the robot to the second machine coordinates. Therefore, a perspective transformation relation exists between the second machine coordinate and the calibration pixel coordinate, and a perspective transformation matrix reflecting the perspective transformation relation can be calculated according to the first machine coordinate, the second machine coordinate and the calibration pixel coordinate.
Step S230: and acquiring the mapping relation between the movement control coordinate of the robot and the position coordinate in the camera visual field according to the perspective transformation matrix.
Step S240: and converting the pixel coordinates of the target point into target machine coordinates according to the mapping relation, wherein the pixel coordinates of the target point comprise position coordinates of the target point in the camera, and the target machine coordinates are movement control coordinates corresponding to the movement of the robot to the target point.
According to the perspective transformation matrix, the pixel coordinates of the target point can be converted into the coordinates of the target machine, so that the robot can move to the target point, and the hand-eye calibration of the robot is completed.
According to the robot hand-eye calibration method, the mapping relation between the movement control coordinate of the robot and the position coordinate in the camera visual field is obtained, the pixel coordinate of the target point can be converted into the target machine coordinate according to the mapping relation, the hand-eye calibration of the robot is completed, and the precision of the robot reaching the position where the object is grabbed can be improved.
In one embodiment, as shown in fig. 3, fig. 3 is a flowchart of target machine coordinate conversion in one embodiment, and the step of converting pixel coordinates of the target point into target machine coordinates according to the mapping relationship includes the following steps:
step S241: and acquiring initial machine coordinates and pixel coordinates of the target point, wherein the initial machine coordinates are movement control coordinates when the camera view of the target point is acquired.
And acquiring the initial machine coordinate while acquiring the pixel coordinate of the target point.
Step S242: and acquiring the coordinates of the target machine according to the initial machine coordinates, the pixel coordinates of the target point and the mapping relation.
The robot is at the position corresponding to the initial machine coordinate, and the target machine coordinate can be obtained from the pixel coordinate of the target point at this moment and the mapping relation.
According to the robot hand-eye calibration method, the target machine coordinate is obtained according to the initial machine coordinate, the pixel coordinate of the target point and the mapping relation, the robot hand-eye calibration is completed, and the precision of the robot reaching the position where the object is grabbed can be improved.
In one embodiment, as shown in fig. 4, fig. 4 is a flowchart of the acquisition of the perspective transformation matrix in one embodiment, and the step of acquiring the perspective transformation matrix according to the first machine coordinate, the second machine coordinate and the calibration pixel coordinate includes the following steps:
step S221: and acquiring a third machine coordinate according to the first machine coordinate and the second machine coordinate, wherein the third machine coordinate is a coordinate of the second machine coordinate projected on the plane where the first machine coordinate is located.
Solving the perspective transformation matrix from the third machine coordinates, which are obtained by projecting onto the plane where the first machine coordinate is located, reduces the computational complexity and amount of computation and improves the precision.
The first machine coordinate may include an X-axis coordinate, a Y-axis coordinate, and a rotation coordinate. And projecting the second machine coordinate on the plane where the first machine coordinate is located, namely projecting the second machine coordinate according to the angle corresponding to the rotation coordinate of the first machine coordinate, wherein the third machine coordinate obtained after the second machine coordinate is projected is the coordinate of the X axis and the Y axis under the corresponding angle.
For example, the first machine coordinate may include an X-axis coordinate, a Y-axis coordinate and a rotation coordinate, where the rotation coordinate is 0. Projecting the second machine coordinate onto the plane of the first machine coordinate can then be understood as projecting the second machine coordinate onto the plane with rotation angle 0. That is, the third machine coordinate obtained by projection is a coordinate with rotation angle 0, so that the computational complexity and amount of computation can be reduced and the precision improved.
Step S222: and acquiring a perspective transformation matrix according to the third machine coordinate and the calibration pixel coordinate.
And solving and acquiring a perspective transformation matrix according to the one-to-one correspondence relationship between the third machine coordinate and the calibration pixel coordinate, wherein the acquired perspective transformation matrix can be used for expressing the mapping relationship between the movement control coordinate of the robot and the position coordinate in the camera visual field.
According to the robot hand-eye calibration method, the third machine coordinate is obtained according to the first machine coordinate and the second machine coordinate, and the perspective transformation matrix is obtained according to the third machine coordinate and the calibration pixel coordinate, so that the calculation complexity and the calculation amount can be reduced, and the precision can be improved.
In one embodiment, the step of obtaining the perspective transformation matrix from the third machine coordinates and the calibration pixel coordinates comprises the steps of:
According to Q_i = P_i·A, the perspective transformation matrix is acquired, where i is the serial number of the designated position, Q_i is the ith third machine coordinate, P_i is the ith calibration pixel coordinate, and A is the perspective transformation matrix.
According to the robot hand-eye calibration method, the perspective transformation matrix is determined from the third machine coordinates and the calibration pixel coordinates, which reduces the computational complexity and amount of computation and improves the precision.
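In row-vector form, Q_i = P_i·A means multiplying the homogeneous pixel row vector (u_i, w_i, 1) by the 3x3 matrix A and dividing by the third component. A minimal sketch, with made-up matrix values in the usage check:

```python
def apply_perspective(A, u, w):
    """Map a pixel coordinate (u, w) to a machine-plane coordinate via the
    row-vector relation Q = P * A, with P = (u, w, 1) and A a 3x3 nested list."""
    # Column j of the product P * A combines column j of A.
    xz = u * A[0][0] + w * A[1][0] + A[2][0]
    yz = u * A[0][1] + w * A[1][1] + A[2][1]
    z = u * A[0][2] + w * A[1][2] + A[2][2]
    return xz / z, yz / z
```

With the identity matrix the pixel coordinate is returned unchanged; a matrix whose last row is (5, 7, 1) simply translates it by (5, 7).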
In one embodiment, the step of obtaining third machine coordinates from the first machine coordinates and the second machine coordinates comprises the steps of:
according to
x_i = x_0 + (vx_i - x_0)·cos(vr_i) + (vy_i - y_0)·sin(vr_i)
y_i = y_0 - (vx_i - x_0)·sin(vr_i) + (vy_i - y_0)·cos(vr_i)
obtaining a third machine coordinate, where i is the serial number of the designated position, Q_i(x_i, y_i) is the ith third machine coordinate, x_i and y_i are respectively the abscissa and ordinate of the ith third machine coordinate, vx_i, vy_i and vr_i are respectively the abscissa, ordinate and rotation coordinate of the ith second machine coordinate, and x_0 and y_0 are respectively the abscissa and ordinate of the first machine coordinate.
According to the robot hand-eye calibration method, x_0 and y_0 are respectively the abscissa and ordinate of the first machine coordinate, and the rotation coordinate of the first machine coordinate is 0. Calculating the third machine coordinates from a first machine coordinate whose rotation coordinate is 0 yields third machine coordinates whose rotation coordinate is likewise 0, which reduces the computational complexity and amount of computation and improves the precision.
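The projection of a second machine coordinate onto the rotation-0 plane is a rotation about the first machine coordinate (x_0, y_0); since the application gives the exact formula only as a figure, the sign convention in this sketch is an assumption:

```python
import math

def project_to_rotation_zero(vx, vy, vr_deg, x0, y0):
    """Project the second machine coordinate (vx, vy) with rotation vr
    (degrees) onto the plane where the rotation coordinate is 0, rotating
    about the first machine coordinate (x0, y0).
    The sign convention is an assumed reconstruction."""
    vr = math.radians(vr_deg)
    dx, dy = vx - x0, vy - y0
    x_i = x0 + dx * math.cos(vr) + dy * math.sin(vr)
    y_i = y0 - dx * math.sin(vr) + dy * math.cos(vr)
    return x_i, y_i
```

When the rotation coordinate is already 0, the projection leaves the coordinate unchanged, matching the statement that the first machine coordinate has rotation coordinate 0.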
In one embodiment, as shown in fig. 5, fig. 5 is a flowchart of solving the perspective transformation matrix in one embodiment, and the step of obtaining the perspective transformation matrix according to the third machine coordinate and the calibration pixel coordinate includes the following steps:
step S223: and establishing a perspective transformation equation according to the third machine coordinate and the calibration pixel coordinate.
A perspective transformation equation for the subsequent solving of the perspective transformation matrix is established from the third machine coordinates and the calibration pixel coordinates at the designated positions, which represent the relation between the movement control coordinate of the robot and the position coordinate in the camera field of view.
Step S224: the perspective transformation matrix is solved according to the perspective transformation equation.
According to the perspective transformation equation and the determined equation relation, the perspective transformation matrix can be accurately solved.
According to the robot hand-eye calibration method, the perspective transformation matrix can be accurately solved through the perspective transformation equation established from the third machine coordinate and the calibration pixel coordinate, so that the precision with which the robot subsequently reaches the position of the grabbed object is improved.
In one embodiment, the step of establishing a perspective transformation equation based on the third machine coordinates and the calibration pixel coordinates comprises the steps of:
step S225: determining
The transformation relation between the coordinate of the third machine and the coordinate of the calibration pixel is shown, wherein i is the serial number of the designated position, and x
iAnd y
iRespectively the abscissa and ordinate of the ith third machine coordinate, z
iSatisfies z
i=a
13u
i+a
23w
i+a
33,u
iAnd w
iRespectively an abscissa and an ordinate of the coordinate of the ith calibration pixel,
for perspective transformation of matrix, a
11、a
12、a
13、a
21、a
22、a
23、a
31、a
32And a
33Respectively, elements of the perspective transformation matrix.
And acquiring a transformation relation between the third machine coordinate and the calibration pixel coordinate according to the third machine coordinate and the calibration pixel coordinate at the specified position, wherein the transformation relation can be used for representing the relation between the movement control coordinate of the robot and the position coordinate in the camera visual field.
Here the 1 in (u_i, w_i, 1) can be regarded as a given reference; the reference is a constant value and may also be a constant other than 1. Taking the reference as 1, however, makes the calculation convenient and simple and improves the accuracy.
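The remark that the reference need not be 1 reflects the homogeneity of the transformation: scaling every element of A by the same nonzero constant leaves the mapped coordinate unchanged, because the constant cancels in the division by z_i. A quick check with illustrative matrix values:

```python
def apply_perspective(A, u, w):
    # Row-vector convention (u, w, 1) * A, then divide by the z component.
    xz = u * A[0][0] + w * A[1][0] + A[2][0]
    yz = u * A[0][1] + w * A[1][1] + A[2][1]
    z = u * A[0][2] + w * A[1][2] + A[2][2]
    return xz / z, yz / z

# Illustrative matrix; any nonzero overall scale describes the same mapping.
A = [[0.05, 0.001, 1e-4], [0.002, 0.048, 2e-4], [10.0, 20.0, 1.0]]
A_scaled = [[5 * a for a in row] for row in A]

p1 = apply_perspective(A, 120.0, 80.0)
p2 = apply_perspective(A_scaled, 120.0, 80.0)  # same point as p1
```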
Step S226: obtaining from the transformation relation
x_i = (a_11·u_i + a_21·w_i + a_31) / (a_13·u_i + a_23·w_i + a_33)
and
y_i = (a_12·u_i + a_22·w_i + a_32) / (a_13·u_i + a_23·w_i + a_33),
which are the perspective transformation equations, where i is the serial number of the designated position, x_i and y_i are respectively the abscissa and ordinate of the ith third machine coordinate, u_i and w_i are respectively the abscissa and ordinate of the ith calibration pixel coordinate, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix.
From the perspective transformation equations obtained at the plurality of specified positions, the elements in the perspective transformation matrix can be accurately solved.
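Concretely, fixing a_33 = 1, each designated position contributes two linear equations in the remaining eight elements, and the stacked system from the nine positions can be solved by ordinary least squares. A self-contained pure-Python sketch; the route via normal equations and Gaussian elimination is one possible implementation, not the application's prescribed method:

```python
def solve_perspective_matrix(machine_pts, pixel_pts):
    """Estimate the 3x3 perspective matrix A (row-vector convention, a33
    fixed to 1) from paired third machine coordinates (x_i, y_i) and
    calibration pixel coordinates (u_i, w_i)."""
    # Unknowns: p = [a11, a21, a31, a12, a22, a32, a13, a23]
    rows, rhs = [], []
    for (x, y), (u, w) in zip(machine_pts, pixel_pts):
        # x * (a13*u + a23*w + 1) = a11*u + a21*w + a31
        rows.append([u, w, 1.0, 0.0, 0.0, 0.0, -x * u, -x * w]); rhs.append(x)
        # y * (a13*u + a23*w + 1) = a12*u + a22*w + a32
        rows.append([0.0, 0.0, 0.0, u, w, 1.0, -y * u, -y * w]); rhs.append(y)
    n = 8
    # Normal equations M p = b with M = R^T R, b = R^T rhs.
    M = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    p = [0.0] * n
    for r in range(n - 1, -1, -1):
        p[r] = (b[r] - sum(M[r][c] * p[c] for c in range(r + 1, n))) / M[r][r]
    a11, a21, a31, a12, a22, a32, a13, a23 = p
    return [[a11, a12, a13], [a21, a22, a23], [a31, a32, 1.0]]
```

With nine exact correspondences the system is overdetermined but consistent, so the least-squares solution reproduces the underlying matrix up to the fixed scale a_33 = 1.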
In addition, and importantly, the included angle between the plane of the flange plate and the plane of the camera field of view can be calculated from the perspective transformation matrix solved from the perspective transformation equations, i.e. the error between the focal plane of the installed camera and the plane of the robot flange plate can be calculated. Here a_13 reflects the included angle between the X axis and the U axis, a_23 reflects the included angle between the Y axis and the W axis, a_11 reflects the proportionality coefficient between the X axis and the U axis, a_12 the proportionality coefficient between the X axis and the W axis, a_21 the proportionality coefficient between the Y axis and the U axis, a_22 the proportionality coefficient between the Y axis and the W axis, a_31 the translation amount between the X axis and the U axis, and a_32 the translation amount between the Y axis and the W axis; a_33 may be 1 in special cases. The X axis and Y axis are the coordinate axes of x and y, and the U axis and W axis are the coordinate axes of u and w.
According to the robot hand-eye calibration method, by determining the transformation relation between the third machine coordinate and the calibration pixel coordinate and obtaining the perspective transformation equations, the elements of the perspective transformation matrix can be accurately solved and the perspective transformation matrix obtained.
In one embodiment, the step of obtaining a mapping relationship of the movement control coordinates of the robot and the position coordinates in the camera field of view from the perspective transformation matrix includes the steps of:
according to
x' = (a_11·u + a_21·w + a_31) / (a_13·u + a_23·w + a_33)
y' = (a_12·u + a_22·w + a_32) / (a_13·u + a_23·w + a_33)
x = x_t + (x' - x_t)·cos(r_t) - (y' - y_t)·sin(r_t)
y = y_t + (x' - x_t)·sin(r_t) + (y' - y_t)·cos(r_t)
acquiring a mapping relation between the movement control coordinate of the robot and the position coordinate in the camera field of view, where x and y are respectively the abscissa and ordinate of the movement control coordinate of the robot, u and w are respectively the abscissa and ordinate of the position coordinate in the camera field of view, a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix, and x_t, y_t and r_t are respectively the abscissa, ordinate and rotation coordinate of the initial machine coordinate, the initial machine coordinate being the movement control coordinate when the camera field of view containing the target point is acquired.
According to the robot hand-eye calibration method, after the solved perspective transformation matrix is obtained, the mapping relation between the movement control coordinate of the robot and the position coordinate in the camera visual field can be accurately obtained, so that the target machine coordinate can be obtained subsequently, the robot hand-eye calibration is completed, and the precision of the robot reaching the position where the object is grabbed is improved.
In one embodiment, the step of converting the pixel coordinates of the target point to target machine coordinates according to the mapping relationship comprises the steps of:
acquiring initial machine coordinates and pixel coordinates of a target point;
according to
x'_t = (a_11·u_t + a_21·w_t + a_31) / (a_13·u_t + a_23·w_t + a_33)
y'_t = (a_12·u_t + a_22·w_t + a_32) / (a_13·u_t + a_23·w_t + a_33)
x_f = x_t + (x'_t - x_t)·cos(r_t) - (y'_t - y_t)·sin(r_t)
y_f = y_t + (x'_t - x_t)·sin(r_t) + (y'_t - y_t)·cos(r_t)
obtaining the target machine coordinate, where x_f and y_f are respectively the abscissa and ordinate of the target machine coordinate, u_t and w_t are respectively the abscissa and ordinate of the pixel coordinate of the target point, a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix, and x_t, y_t and r_t are respectively the abscissa, ordinate and rotation coordinate of the initial machine coordinate.
According to the robot hand-eye calibration method, the target machine coordinate is obtained according to the initial machine coordinate, the pixel coordinate of the target point and the mapping relation, the robot hand-eye calibration is completed, and the precision of the robot reaching the position where the object is grabbed can be improved.
And acquiring the initial machine coordinate while acquiring the pixel coordinate of the target point. The robot is in the position corresponding to the initial machine coordinate, and the target machine coordinate can be obtained according to the pixel coordinate and the mapping relation of the target point at the moment.
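A sketch of the conversion, combining the perspective transform with a rotation compensation about the initial machine coordinate; the rotation convention is an assumption, since the application's exact formula appears only as a figure:

```python
import math

def pixel_to_target_machine(A, u_t, w_t, x_t, y_t, r_t_deg):
    """Convert the pixel coordinate (u_t, w_t) of the target point into the
    target machine coordinate (x_f, y_f), given the initial machine
    coordinate (x_t, y_t, r_t). The rotation compensation about (x_t, y_t)
    is an assumed convention."""
    # Perspective part: row-vector relation (u, w, 1) * A, dehomogenized.
    z = u_t * A[0][2] + w_t * A[1][2] + A[2][2]
    xp = (u_t * A[0][0] + w_t * A[1][0] + A[2][0]) / z
    yp = (u_t * A[0][1] + w_t * A[1][1] + A[2][1]) / z
    # Rotate back through r_t about the initial machine coordinate.
    r = math.radians(r_t_deg)
    dx, dy = xp - x_t, yp - y_t
    x_f = x_t + dx * math.cos(r) - dy * math.sin(r)
    y_f = y_t + dx * math.sin(r) + dy * math.cos(r)
    return x_f, y_f
```

When the rotation coordinate of the initial machine coordinate is 0, the conversion reduces to the plain perspective transform.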
In one embodiment, after the step of solving the perspective transformation matrix according to the perspective transformation equation, the following steps are further included:
according to
min Σ_i [ (x_i - (a_11·u_i + a_21·w_i + a_31)/(a_13·u_i + a_23·w_i + a_33))² + (y_i - (a_12·u_i + a_22·w_i + a_32)/(a_13·u_i + a_23·w_i + a_33))² ]
obtaining a perspective transformation matrix that minimizes the value, where i is the serial number of the designated position, x_i and y_i are respectively the abscissa and ordinate of the ith third machine coordinate, u_i and w_i are respectively the abscissa and ordinate of the ith calibration pixel coordinate, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix.
According to the robot hand-eye calibration method, the perspective transformation matrix which enables the value to be minimum is obtained, the optimized perspective transformation matrix which is closest to the actual perspective transformation matrix can be obtained, the error is further reduced, the accuracy of the perspective transformation matrix is improved, and therefore the precision of the robot reaching the position where the object is grabbed can be improved.
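The quantity being minimized can be evaluated for any candidate matrix as the sum of squared residuals of the perspective transformation equations over the designated positions; a minimal sketch, with made-up data in the check:

```python
def reprojection_error(A, machine_pts, pixel_pts):
    """Sum of squared residuals of the perspective transformation equations
    over all designated positions; the matrix minimizing this value is the
    one retained as the optimized perspective transformation matrix."""
    total = 0.0
    for (x, y), (u, w) in zip(machine_pts, pixel_pts):
        z = u * A[0][2] + w * A[1][2] + A[2][2]
        xp = (u * A[0][0] + w * A[1][0] + A[2][0]) / z
        yp = (u * A[0][1] + w * A[1][1] + A[2][1]) / z
        total += (x - xp) ** 2 + (y - yp) ** 2
    return total
```

An exact matrix yields an error of 0; any perturbation of the data or the matrix makes the value strictly positive, which is what the minimization exploits.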
In another embodiment, as shown in fig. 6, fig. 6 is a flowchart of a robot hand-eye calibration method in another embodiment, where the robot hand-eye calibration method in this embodiment includes the following steps:
and acquiring a first machine coordinate and a second machine coordinate of the robot, acquiring a calibration pixel coordinate of the feature point in the camera view, and matching the corresponding second machine coordinate and calibration pixel coordinate under 9 groups of specified positions. The camera and the standard jig are installed at the tail end of the robot, a fixed characteristic point is placed below the tail end of the robot, the camera is adjusted to enable the focal plane of the camera to be clear, the characteristic point and the plane where the characteristic point is located can be easily identified and unique, and the shape of the characteristic point can be circular, round hole-shaped and the like. And aligning the tail end of the calibration jig to the center of the characteristic point, and recording the movement control coordinate of the robot at the moment as a first machine coordinate. The robot moves to drive the camera view to change, so that the feature points can respectively appear at the upper left, upper right, left, middle, right, lower left, lower right and lower right in the camera view, calibration pixel coordinates at the 9 specified positions are obtained, and corresponding 9 second machine coordinates are obtained while the 9 calibration pixel coordinates are collected; the images can be processed and the calibration pixel coordinates of the feature points can be extracted by collecting the images of the camera at the moment. According to the first machine coordinate, the 9 second machine coordinates are normalized, and corresponding 9 third machine coordinates are obtained, where the normalization may mean that the second machine coordinates are projected on a plane with a rotation angle of 0, and the obtained third machine coordinates are coordinates with a rotation angle of 0. According to
x_i = x_0 + (vx_i - x_0)·cos(vr_i) + (vy_i - y_0)·sin(vr_i)
y_i = y_0 - (vx_i - x_0)·sin(vr_i) + (vy_i - y_0)·cos(vr_i)
the third machine coordinate is obtained, where i is the serial number of the designated position, Q_i(x_i, y_i) is the ith third machine coordinate, x_i and y_i are respectively the abscissa and ordinate of the ith third machine coordinate, vx_i, vy_i and vr_i are respectively the abscissa, ordinate and rotation coordinate of the ith second machine coordinate, and x_0 and y_0 are respectively the abscissa and ordinate of the first machine coordinate.
A perspective transformation matrix is calculated from the matched second machine coordinates and calibration pixel coordinates. After the third machine coordinates are obtained from the second machine coordinates, a perspective transformation conversion model is established from the third machine coordinate and the corresponding calibration pixel coordinate at the same designated position, so that Q_i = P_i·A, where i is the serial number of the designated position, Q_i is the ith third machine coordinate, P_i is the ith calibration pixel coordinate, and A is the perspective transformation matrix. From the perspective transformation model, the transformation relation
(x_i·z_i, y_i·z_i, z_i) = (u_i, w_i, 1)·A
between the third machine coordinate and the calibration pixel coordinate is determined, where x_i and y_i are respectively the abscissa and ordinate of the ith third machine coordinate, z_i satisfies z_i = a_13·u_i + a_23·w_i + a_33, u_i and w_i are respectively the abscissa and ordinate of the ith calibration pixel coordinate, and A is the perspective transformation matrix with elements a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33. From the transformation relation, the perspective transformation equations
x_i = (a_11·u_i + a_21·w_i + a_31) / (a_13·u_i + a_23·w_i + a_33)
and
y_i = (a_12·u_i + a_22·w_i + a_32) / (a_13·u_i + a_23·w_i + a_33)
are obtained. From the perspective transformation equations and
min Σ_i [ (x_i - (a_11·u_i + a_21·w_i + a_31)/(a_13·u_i + a_23·w_i + a_33))² + (y_i - (a_12·u_i + a_22·w_i + a_32)/(a_13·u_i + a_23·w_i + a_33))² ]
a perspective transformation matrix that minimizes the value is obtained.
The pixel coordinate of the target point in the camera field of view is converted into the target machine coordinate according to the perspective transformation matrix, so that the robot can move to the target point and the hand-eye calibration of the robot is completed. According to
x' = (a_11·u + a_21·w + a_31) / (a_13·u + a_23·w + a_33)
y' = (a_12·u + a_22·w + a_32) / (a_13·u + a_23·w + a_33)
x = x_t + (x' - x_t)·cos(r_t) - (y' - y_t)·sin(r_t)
y = y_t + (x' - x_t)·sin(r_t) + (y' - y_t)·cos(r_t)
the mapping relation between the movement control coordinate of the robot and the position coordinate in the camera field of view is acquired, where x and y are respectively the abscissa and ordinate of the movement control coordinate of the robot, u and w are respectively the abscissa and ordinate of the position coordinate in the camera field of view, and x_t, y_t and r_t are respectively the abscissa, ordinate and rotation coordinate of the initial machine coordinate, the initial machine coordinate being the movement control coordinate when the camera field of view containing the target point is acquired. Evaluating this mapping at the pixel coordinate of the target point yields the target machine coordinate, where x_f and y_f are respectively the abscissa and ordinate of the target machine coordinate and u_t and w_t are respectively the abscissa and ordinate of the pixel coordinate of the target point.
According to the robot hand-eye calibration method, with the second machine coordinates and the calibration pixel coordinates of the feature point at the 9 designated positions, an accurate and optimal perspective transformation matrix can be calculated, the pixel coordinate of the target point can be converted into the target machine coordinate according to the mapping relation, the hand-eye calibration of the robot is completed, and the precision with which the robot reaches the position of the grabbed object can be improved. Meanwhile, the included angle between the plane of the flange plate and the plane of the camera field of view can be calculated from the perspective transformation matrix solved from the perspective transformation equations, i.e. the error between the focal plane of the installed camera and the plane of the robot flange plate. Here a_13 reflects the included angle between the X axis and the U axis, and a_23 reflects the included angle between the Y axis and the W axis.
It should be understood that although the steps in the flowcharts of fig. 2 to 6 are shown in the order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not subject to a strict order restriction and may be performed in other orders. Moreover, at least some of the steps in fig. 2 to 6 may include multiple sub-steps or multiple stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In an embodiment, as shown in fig. 7, fig. 7 is a schematic structural diagram of a robot hand-eye calibration system in an embodiment. A robot hand-eye calibration system is provided, where a camera and a standard jig are installed at the tail end of a robot and a feature point is arranged on a fixed calibration object. The system includes a coordinate acquisition module 310, a perspective transformation matrix acquisition module 320 and a coordinate transformation module 330, wherein:
the coordinate acquisition module 310 is configured to acquire a first machine coordinate, a second machine coordinate, and a calibration pixel coordinate, where the first machine coordinate is a movement control coordinate of the robot when the terminal of the calibration jig aligns with the feature point along a standard direction, the standard direction is a normal direction of a flange plate mounted on the robot, the second machine coordinate is a movement control coordinate of the robot when the feature point is located at a specified position in a camera field of view, and the calibration pixel coordinate is a position coordinate of the feature point in the camera field of view when the feature point moves to the specified position in the camera field of view along with the robot;
a perspective transformation matrix obtaining module 320, configured to obtain a perspective transformation matrix according to the first machine coordinate, the second machine coordinate, and the calibration pixel coordinate;
and the coordinate transformation module 330 is configured to obtain a mapping relationship between the movement control coordinate of the robot and the position coordinate in the camera field of view according to the perspective transformation matrix, and convert the pixel coordinate of the target point into a target machine coordinate according to the mapping relationship, where the pixel coordinate of the target point includes the position coordinate of the target point in the camera, and the target machine coordinate is the movement control coordinate corresponding to the movement of the robot to the target point.
According to the robot hand-eye calibration system, the mapping relation between the movement control coordinate of the robot and the position coordinate in the camera visual field is obtained, the pixel coordinate of the target point can be converted into the target machine coordinate according to the mapping relation, the hand-eye calibration of the robot is completed, and the precision of the robot reaching the position where the object is grabbed can be improved.
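The division of labor among the three modules can be sketched as follows; the class names and the reduced interfaces are illustrative assumptions, and the matrix-solving step is omitted for brevity:

```python
class CoordinateAcquisitionModule:
    """Holds the first machine coordinate, the second machine coordinates
    and the calibration pixel coordinates (module 310)."""
    def __init__(self, first, seconds, pixels):
        self.first, self.seconds, self.pixels = first, seconds, pixels

class PerspectiveMatrixModule:
    """Holds the solved perspective transformation matrix A (module 320);
    the solving step itself is omitted here."""
    def __init__(self, A):
        self.A = A  # 3x3 nested list, row-vector convention

class CoordinateTransformModule:
    """Maps a target pixel coordinate to a machine coordinate (module 330),
    using the row-vector relation (u, w, 1) * A."""
    def __init__(self, matrix_module):
        self.A = matrix_module.A

    def convert(self, u, w):
        A = self.A
        z = u * A[0][2] + w * A[1][2] + A[2][2]
        return ((u * A[0][0] + w * A[1][0] + A[2][0]) / z,
                (u * A[0][1] + w * A[1][1] + A[2][1]) / z)
```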
In one embodiment, the coordinate transformation module 330 is further configured to acquire initial machine coordinates and pixel coordinates of a target point, where the initial machine coordinates are movement control coordinates when acquiring a camera view in which the target point is located; and acquiring the coordinates of the target machine according to the initial machine coordinates, the pixel coordinates of the target point and the mapping relation.
According to the robot hand-eye calibration system, the target machine coordinate is obtained according to the initial machine coordinate, the pixel coordinate of the target point and the mapping relation, the robot hand-eye calibration is completed, and the precision of the robot reaching the position where the object is grabbed can be improved.
In one embodiment, the perspective transformation matrix obtaining module 320 is further configured to obtain a third machine coordinate according to the first machine coordinate and the second machine coordinate, where the third machine coordinate is a coordinate of the second machine coordinate projected on the plane where the first machine coordinate is located; and acquiring a perspective transformation matrix according to the third machine coordinate and the calibration pixel coordinate.
According to the robot hand-eye calibration system, the third machine coordinate is obtained according to the first machine coordinate and the second machine coordinate, and the perspective transformation matrix is obtained according to the third machine coordinate and the calibration pixel coordinate, so that the calculation complexity and the calculation amount can be reduced, and the precision can be improved.
In one embodiment, the perspective transformation matrix acquisition module 320 is further configured to obtain the perspective transformation matrix according to Q_i = P_i·A, where i is the serial number of the specified position, Q_i is the i-th third machine coordinate, P_i is the i-th calibration pixel coordinate, and A is the perspective transformation matrix.
According to the robot hand-eye calibration system, the perspective transformation matrix is determined from the third machine coordinates and the calibration pixel coordinates, which reduces computational complexity and calculation amount and improves precision.
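The relation Q_i = P_i·A can be sketched numerically. In the sketch below the matrix values are purely illustrative (in practice A is solved from the calibration data), and the row-vector convention P_i = (u_i, w_i, 1) with division by z_i = a_13·u_i + a_23·w_i + a_33 is assumed:

```python
import numpy as np

# Illustrative 3x3 perspective transformation matrix A; these values are
# made up for the example and would normally come from calibration.
A = np.array([
    [0.05, 0.00, 1e-6],
    [0.00, 0.05, 2e-6],
    [10.0, 20.0, 1.0],
])

def apply_perspective(u, w, A):
    """Apply Q_i = P_i @ A with P_i = (u_i, w_i, 1) as a row vector,
    then divide by z_i = a13*u + a23*w + a33 to recover (x_i, y_i)."""
    q = np.array([u, w, 1.0]) @ A
    return q[0] / q[2], q[1] / q[2]

x, y = apply_perspective(100.0, 200.0, A)  # machine-coordinate estimate
```

With a_13 = a_23 = 0 the mapping degenerates to an affine transform; the nonzero third column is what models the perspective effect.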
In one embodiment, the perspective transformation matrix acquisition module 320 is further configured to obtain the third machine coordinate according to a projection formula, where i is the serial number of the specified position, Q_i(x_i, y_i) is the i-th third machine coordinate, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, vx_i, vy_i and vr_i are respectively the abscissa, ordinate and rotation coordinate of the i-th second machine coordinate, and x_0 and y_0 are respectively the abscissa and ordinate of the first machine coordinate.
According to the robot hand-eye calibration system, x_0 and y_0 are respectively the abscissa and ordinate of the first machine coordinate, and the rotation coordinate of the first machine coordinate is 0. Because the third machine coordinates are calculated from a first machine coordinate whose rotation coordinate is 0, third machine coordinates that likewise have a rotation coordinate of 0 can be obtained, which reduces computational complexity and calculation amount and improves precision.
In one embodiment, the perspective transformation matrix obtaining module 320 is further configured to establish a perspective transformation equation according to the third machine coordinate and the calibration pixel coordinate; the perspective transformation matrix is solved according to the perspective transformation equation.
According to the robot hand-eye calibration system, the perspective transformation matrix can be accurately solved through the perspective transformation equations established from the third machine coordinates and the calibration pixel coordinates, thereby subsequently improving the accuracy with which the robot reaches the object-grasping position.
In one embodiment, the perspective transformation matrix acquisition module 320 is further configured to determine the transformation relation between the third machine coordinates and the calibration pixel coordinates,
(x_i·z_i, y_i·z_i, z_i) = (u_i, w_i, 1)·A,
where i is the serial number of the specified position, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, z_i satisfies z_i = a_13·u_i + a_23·w_i + a_33, u_i and w_i are respectively the abscissa and ordinate of the i-th calibration pixel coordinate, A is the perspective transformation matrix, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix; and to obtain from the transformation relation the perspective transformation equations
x_i = (a_11·u_i + a_21·w_i + a_31) / (a_13·u_i + a_23·w_i + a_33) and
y_i = (a_12·u_i + a_22·w_i + a_32) / (a_13·u_i + a_23·w_i + a_33),
where i is the serial number of the specified position, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, u_i and w_i are respectively the abscissa and ordinate of the i-th calibration pixel coordinate, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix.
According to the robot hand-eye calibration system, by determining the transformation relation between the third machine coordinates and the calibration pixel coordinates and obtaining the perspective transformation equations, the elements of the perspective transformation matrix can be accurately solved, and the perspective transformation matrix is thereby obtained.
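The perspective transformation equations are linear in the matrix elements once multiplied through by the denominator, so A can be solved from point correspondences. The sketch below fixes a_33 = 1 as a normalisation (an assumption for illustration; any nonzero scaling of A gives the same mapping) and solves the resulting linear system with NumPy; at least four calibration point pairs are needed:

```python
import numpy as np

def solve_perspective_matrix(pixels, machines):
    """Solve the 3x3 perspective transformation matrix A (row-vector
    convention Q_i = P_i @ A) from paired calibration pixel coordinates
    (u_i, w_i) and third machine coordinates (x_i, y_i).
    a33 is fixed to 1 as a normalisation (assumes a33 != 0)."""
    M, b = [], []
    for (u, w), (x, y) in zip(pixels, machines):
        # x*(a13*u + a23*w + 1) = a11*u + a21*w + a31
        M.append([u, 0.0, -x * u, w, 0.0, -x * w, 1.0, 0.0]); b.append(x)
        # y*(a13*u + a23*w + 1) = a12*u + a22*w + a32
        M.append([0.0, u, -y * u, 0.0, w, -y * w, 0.0, 1.0]); b.append(y)
    p, *_ = np.linalg.lstsq(np.asarray(M, float), np.asarray(b, float),
                            rcond=None)
    a11, a12, a13, a21, a22, a23, a31, a32 = p
    return np.array([[a11, a12, a13], [a21, a22, a23], [a31, a32, 1.0]])
```

With exactly four point pairs the linear system is square; with more, `lstsq` returns the solution minimising the linearised residual.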
In one embodiment, the coordinate transformation module 330 is further configured to obtain, according to the perspective transformation matrix, the mapping relation between the movement control coordinates of the robot and the position coordinates in the camera field of view, where x and y are respectively the abscissa and ordinate of the movement control coordinate of the robot, u and w are respectively the abscissa and ordinate of the position coordinate in the camera field of view, a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix, and x_t, y_t and r_t are respectively the abscissa, ordinate and rotation coordinate of the initial machine coordinate, the initial machine coordinate being the movement control coordinate when the camera field of view in which the target point is located is acquired.
According to the robot hand-eye calibration system, after the perspective transformation matrix is solved, the mapping relation between the movement control coordinates of the robot and the position coordinates in the camera field of view can be accurately obtained, so that the target machine coordinate can be obtained subsequently, the hand-eye calibration of the robot is completed, and the accuracy with which the robot reaches the object-grasping position is improved.
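As a concrete illustration of applying the mapping, the sketch below converts a target point's pixel coordinate into a target machine coordinate. The composition used here, treating the perspective-mapped point as an offset that is rotated by the initial rotation coordinate r_t and translated by (x_t, y_t), is an assumption for illustration only, since the exact mapping formula appears in the patent drawings rather than in the text:

```python
import math
import numpy as np

def target_machine_coord(u_t, w_t, A, x_t, y_t, r_t):
    """Convert the target point's pixel coordinate (u_t, w_t) into a
    target machine coordinate (x_f, y_f).  ASSUMPTION: the
    perspective-mapped coordinate is treated as an offset in the
    rotation-0 plane, rotated by the initial rotation coordinate r_t
    (radians) and translated by the initial machine coordinate
    (x_t, y_t); the patent's exact relation is not reproduced here."""
    q = np.array([u_t, w_t, 1.0]) @ A
    xp, yp = q[0] / q[2], q[1] / q[2]   # perspective-mapped point
    x_f = x_t + xp * math.cos(r_t) - yp * math.sin(r_t)
    y_f = y_t + xp * math.sin(r_t) + yp * math.cos(r_t)
    return x_f, y_f
```

When the initial rotation coordinate is 0, this reduces to a pure translation of the perspective-mapped point by the initial machine coordinate.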
In one embodiment, the coordinate transformation module 330 is further configured to obtain the initial machine coordinates and the pixel coordinates of the target point, and to obtain the target machine coordinates according to the mapping relation, where x_f and y_f are respectively the abscissa and ordinate of the target machine coordinate, u_t and w_t are respectively the abscissa and ordinate of the pixel coordinate of the target point, a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix, and x_t, y_t and r_t are respectively the abscissa, ordinate and rotation coordinate of the initial machine coordinate.
According to the robot hand-eye calibration system, the target machine coordinate is obtained from the initial machine coordinate, the pixel coordinate of the target point and the mapping relation, the hand-eye calibration of the robot is completed, and the accuracy with which the robot reaches the object-grasping position can be improved.
In one embodiment, the perspective transformation matrix acquisition module 320 is further configured to obtain the perspective transformation matrix that minimizes the sum of the squared residuals of the perspective transformation equations,
Σ_i { [x_i − (a_11·u_i + a_21·w_i + a_31)/(a_13·u_i + a_23·w_i + a_33)]² + [y_i − (a_12·u_i + a_22·w_i + a_32)/(a_13·u_i + a_23·w_i + a_33)]² },
where i is the serial number of the specified position, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, u_i and w_i are respectively the abscissa and ordinate of the i-th calibration pixel coordinate, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix.
According to the robot hand-eye calibration system, by obtaining the perspective transformation matrix that minimizes this error, an optimized perspective transformation matrix closest to the actual one can be obtained, further reducing error and improving the accuracy of the perspective transformation matrix, and thereby improving the accuracy with which the robot reaches the object-grasping position.
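The minimised quantity can be written as a small helper: the sum of squared differences between each third machine coordinate and the coordinate predicted from the corresponding calibration pixel coordinate. This is a sketch of the residual only; how the optimisation itself is carried out is not specified in the text:

```python
import numpy as np

def reprojection_error(A, pixels, machines):
    """Sum of squared residuals of the perspective transformation
    equations over all calibration points -- the quantity a
    least-squares fit of A seeks to minimise."""
    err = 0.0
    for (u, w), (x, y) in zip(pixels, machines):
        q = np.array([u, w, 1.0]) @ A    # row-vector convention Q = P A
        err += (x - q[0] / q[2]) ** 2 + (y - q[1] / q[2]) ** 2
    return err
```

A perfect calibration gives an error of 0; with more than four point pairs the residual is generally nonzero, and the best matrix is the one making it smallest.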
For specific limitations of the robot hand-eye calibration system, reference may be made to the limitations of the robot hand-eye calibration method above, which are not repeated here. All or part of the modules in the robot hand-eye calibration system can be implemented by software, by hardware, or by a combination thereof. The modules can be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a robot hand-eye calibration method.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a first machine coordinate, a second machine coordinate and a calibration pixel coordinate, wherein the first machine coordinate is a movement control coordinate of the robot when the tail end of the calibration jig is aligned with the feature point along a standard direction, the standard direction is a normal direction of a flange plate installed on the robot, the second machine coordinate is a movement control coordinate of the robot when the feature point is located at a specified position in a camera visual field, and the calibration pixel coordinate is a position coordinate of the feature point in the camera visual field when the feature point moves to the specified position in the camera visual field along with the robot;
acquiring a perspective transformation matrix according to the first machine coordinate, the second machine coordinate and the calibration pixel coordinate;
and acquiring a mapping relation between the movement control coordinate of the robot and the position coordinate in the field of view of the camera according to the perspective transformation matrix, and converting the pixel coordinate of the target point into a target machine coordinate according to the mapping relation, wherein the pixel coordinate of the target point comprises the position coordinate of the target point in the camera, and the target machine coordinate is the movement control coordinate corresponding to the movement of the robot to the target point.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring initial machine coordinates and pixel coordinates of a target point, wherein the initial machine coordinates are movement control coordinates when a camera view of the target point is acquired; and acquiring the coordinates of the target machine according to the initial machine coordinates, the pixel coordinates of the target point and the mapping relation.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a third machine coordinate according to the first machine coordinate and the second machine coordinate, wherein the third machine coordinate is a coordinate of the second machine coordinate projected on a plane where the first machine coordinate is located; and acquiring a perspective transformation matrix according to the third machine coordinate and the calibration pixel coordinate.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
according to Q_i = P_i·A, obtaining the perspective transformation matrix, where i is the serial number of the specified position, Q_i is the i-th third machine coordinate, P_i is the i-th calibration pixel coordinate, and A is the perspective transformation matrix.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining the third machine coordinate according to a projection formula, where i is the serial number of the specified position, Q_i(x_i, y_i) is the i-th third machine coordinate, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, vx_i, vy_i and vr_i are respectively the abscissa, ordinate and rotation coordinate of the i-th second machine coordinate, and x_0 and y_0 are respectively the abscissa and ordinate of the first machine coordinate.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
establishing a perspective transformation equation according to the third machine coordinate and the calibration pixel coordinate; the perspective transformation matrix is solved according to the perspective transformation equation.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the transformation relation between the third machine coordinates and the calibration pixel coordinates,
(x_i·z_i, y_i·z_i, z_i) = (u_i, w_i, 1)·A,
where i is the serial number of the specified position, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, z_i satisfies z_i = a_13·u_i + a_23·w_i + a_33, u_i and w_i are respectively the abscissa and ordinate of the i-th calibration pixel coordinate, A is the perspective transformation matrix, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix;
and obtaining from the transformation relation the perspective transformation equations
x_i = (a_11·u_i + a_21·w_i + a_31) / (a_13·u_i + a_23·w_i + a_33) and
y_i = (a_12·u_i + a_22·w_i + a_32) / (a_13·u_i + a_23·w_i + a_33),
where i is the serial number of the specified position, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, u_i and w_i are respectively the abscissa and ordinate of the i-th calibration pixel coordinate, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring the initial machine coordinates and the pixel coordinates of the target point; and obtaining the target machine coordinates according to the mapping relation, where x_f and y_f are respectively the abscissa and ordinate of the target machine coordinate, u_t and w_t are respectively the abscissa and ordinate of the pixel coordinate of the target point, a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix, and x_t, y_t and r_t are respectively the abscissa, ordinate and rotation coordinate of the initial machine coordinate.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining the perspective transformation matrix that minimizes the sum of the squared residuals of the perspective transformation equations,
Σ_i { [x_i − (a_11·u_i + a_21·w_i + a_31)/(a_13·u_i + a_23·w_i + a_33)]² + [y_i − (a_12·u_i + a_22·w_i + a_32)/(a_13·u_i + a_23·w_i + a_33)]² },
where i is the serial number of the specified position, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, u_i and w_i are respectively the abscissa and ordinate of the i-th calibration pixel coordinate, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a first machine coordinate, a second machine coordinate and a calibration pixel coordinate, wherein the first machine coordinate is a movement control coordinate of the robot when the tail end of the calibration jig is aligned with the feature point along a standard direction, the standard direction is a normal direction of a flange plate installed on the robot, the second machine coordinate is a movement control coordinate of the robot when the feature point is located at a specified position in a camera visual field, and the calibration pixel coordinate is a position coordinate of the feature point in the camera visual field when the feature point moves to the specified position in the camera visual field along with the robot;
acquiring a perspective transformation matrix according to the first machine coordinate, the second machine coordinate and the calibration pixel coordinate;
and acquiring a mapping relation between the movement control coordinate of the robot and the position coordinate in the field of view of the camera according to the perspective transformation matrix, and converting the pixel coordinate of the target point into a target machine coordinate according to the mapping relation, wherein the pixel coordinate of the target point comprises the position coordinate of the target point in the camera, and the target machine coordinate is the movement control coordinate corresponding to the movement of the robot to the target point.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring initial machine coordinates and pixel coordinates of a target point, wherein the initial machine coordinates are movement control coordinates when a camera view of the target point is acquired; and acquiring the coordinates of the target machine according to the initial machine coordinates, the pixel coordinates of the target point and the mapping relation.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a third machine coordinate according to the first machine coordinate and the second machine coordinate, wherein the third machine coordinate is a coordinate of the second machine coordinate projected on a plane where the first machine coordinate is located; and acquiring a perspective transformation matrix according to the third machine coordinate and the calibration pixel coordinate.
In one embodiment, the computer program when executed by the processor further performs the steps of:
according to Q_i = P_i·A, obtaining the perspective transformation matrix, where i is the serial number of the specified position, Q_i is the i-th third machine coordinate, P_i is the i-th calibration pixel coordinate, and A is the perspective transformation matrix.
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining the third machine coordinate according to a projection formula, where i is the serial number of the specified position, Q_i(x_i, y_i) is the i-th third machine coordinate, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, vx_i, vy_i and vr_i are respectively the abscissa, ordinate and rotation coordinate of the i-th second machine coordinate, and x_0 and y_0 are respectively the abscissa and ordinate of the first machine coordinate.
In one embodiment, the computer program when executed by the processor further performs the steps of:
establishing a perspective transformation equation according to the third machine coordinate and the calibration pixel coordinate; the perspective transformation matrix is solved according to the perspective transformation equation.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the transformation relation between the third machine coordinates and the calibration pixel coordinates,
(x_i·z_i, y_i·z_i, z_i) = (u_i, w_i, 1)·A,
where i is the serial number of the specified position, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, z_i satisfies z_i = a_13·u_i + a_23·w_i + a_33, u_i and w_i are respectively the abscissa and ordinate of the i-th calibration pixel coordinate, A is the perspective transformation matrix, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix;
and obtaining from the transformation relation the perspective transformation equations
x_i = (a_11·u_i + a_21·w_i + a_31) / (a_13·u_i + a_23·w_i + a_33) and
y_i = (a_12·u_i + a_22·w_i + a_32) / (a_13·u_i + a_23·w_i + a_33),
where i is the serial number of the specified position, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, u_i and w_i are respectively the abscissa and ordinate of the i-th calibration pixel coordinate, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring the initial machine coordinates and the pixel coordinates of the target point; and obtaining the target machine coordinates according to the mapping relation, where x_f and y_f are respectively the abscissa and ordinate of the target machine coordinate, u_t and w_t are respectively the abscissa and ordinate of the pixel coordinate of the target point, a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix, and x_t, y_t and r_t are respectively the abscissa, ordinate and rotation coordinate of the initial machine coordinate.
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining the perspective transformation matrix that minimizes the sum of the squared residuals of the perspective transformation equations,
Σ_i { [x_i − (a_11·u_i + a_21·w_i + a_31)/(a_13·u_i + a_23·w_i + a_33)]² + [y_i − (a_12·u_i + a_22·w_i + a_32)/(a_13·u_i + a_23·w_i + a_33)]² },
where i is the serial number of the specified position, x_i and y_i are respectively the abscissa and ordinate of the i-th third machine coordinate, u_i and w_i are respectively the abscissa and ordinate of the i-th calibration pixel coordinate, and a_11, a_12, a_13, a_21, a_22, a_23, a_31, a_32 and a_33 are respectively the elements of the perspective transformation matrix.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above-mentioned embodiments merely express several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.