CN115194774A - Binocular vision-based control method for double-mechanical-arm gripping system - Google Patents

Binocular vision-based control method for double-mechanical-arm gripping system

Info

Publication number
CN115194774A
CN115194774A (application CN202211032772.4A)
Authority
CN
China
Prior art keywords
arm
target object
image
mask
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211032772.4A
Other languages
Chinese (zh)
Inventor
方若愚
蔡骋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN202211032772.4A priority Critical patent/CN115194774A/en
Publication of CN115194774A publication Critical patent/CN115194774A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B25J9/1682: Dual arm manipulator; Coordination of several manipulators
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a control method for a double-mechanical-arm grasping system based on multi-view vision, which comprises the following steps: building a humanoid double mechanical arm provided with a plurality of joint steering engines and a development board that controls the working state of each joint steering engine; constructing a trinocular vision recognition analysis system comprising a trinocular vision camera that acquires images of a target object from different viewpoints, the trinocular vision camera being connected to a processor that performs image recognition analysis and calculates and outputs control instructions for the joint steering engines of the two mechanical arms; the processor transmits the control instructions to the development board, which correspondingly controls the two mechanical arms to complete the grabbing of the target object. Compared with the prior art, the invention achieves accurate positioning and identification of the target object and can control the two mechanical arms to perform efficient and accurate single-arm gripping or double-arm cooperative gripping for different target objects.

Description

Binocular vision-based control method for double-mechanical-arm gripping system
Technical Field
The invention relates to the technical field of double-mechanical-arm control, in particular to a control method of a double-mechanical-arm gripping system based on multi-view vision.
Background
With the rapid development of industrial automation, mechanical arms are now widely applied in the industrial field, where simple assembly and manufacturing work is mostly completed by mechanical arms with 3-4 degrees of freedom. Most industrial mechanical arms are used for assembly-line work; their working process consists of monotonous, repeated operations executed according to a preset program, and they lack autonomy and decision-making ability.
Therefore, the industrial robot arm is difficult to apply to complex daily-life scenes, such as a household service robot. In the prior art, laser ranging is adopted to position a target object and is combined with control of the mechanical arm so that the arm can move to the target position and perform the corresponding operation. However, the cost of laser ranging radar is high and rises with the number of beams; a 64-line lidar, for example, costs tens of thousands of yuan. The ranging range of a laser radar is also limited in space, and positioning loss and errors easily occur in environments with large dynamic changes. In addition, such a mechanical arm has difficulty performing different operations for different targets, resulting in low operation efficiency and accuracy.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned defects in the prior art by providing a method for controlling a double-mechanical-arm gripping system based on multi-view vision, so as to achieve accurate positioning and identification of a target object and enable the double mechanical arms to operate on the target object efficiently and accurately.
The purpose of the invention can be realized by the following technical scheme: a control method of a double-mechanical-arm grasping system based on multi-view vision comprises the following steps:
S1, building an anthropomorphic double mechanical arm, wherein the anthropomorphic double mechanical arm is provided with a plurality of joint steering engines and a development board for controlling the working state of each joint steering engine;
S2, constructing a trinocular vision recognition analysis system, wherein the trinocular vision recognition analysis system comprises a trinocular vision camera for acquiring images of a target object from different viewpoints, the trinocular vision camera is connected to a processor, and the processor is used for carrying out image recognition analysis and calculating and outputting control instructions for the joint steering engines of the two mechanical arms;
S3, the processor transmits the control instructions to the development board, which then correspondingly controls the two mechanical arms to complete the grabbing operation of the target object.
Further, the step S1 specifically includes the following steps:
S11, dividing the design of the mechanical arm into a shoulder joint, an elbow joint and a wrist joint according to the division of the joints of the human arm;
in order to simulate the motion of the human arm, 2 joint steering engines are arranged on each joint, giving each joint 2 degrees of freedom;
S12, calculating the torque value corresponding to each joint steering engine according to the torque calculation formula and the measured joint length, so as to complete model selection of the joint steering engines;
S13, correspondingly connecting each joint steering engine to the development board so as to receive the corresponding control instructions from the development board.
Further, the joint steering engine specifically adopts a magnetically encoded bus steering engine.
Further, the development board is specifically a URT-1 development board.
Further, the specific working process of the processor in step S2 includes:
S21, performing example segmentation on the image acquired by the trinocular vision camera to realize identification of the target object in the image and extract a mask of the target to be grabbed;
S22, calculating the centroid pixel coordinates of the target object by using the first moment of the image according to the extracted mask;
calculating the principal axis of the target object and its direction angle by using the second moment of the image;
S23, converting the pixel coordinates of the mass center of the target object into space coordinates by using the space coordinate conversion relation of the trinocular stereo vision, and calculating the positions of the gripping points that need to be cooperatively gripped by the two mechanical arms according to the direction angle of the principal axis;
S24, constructing an inverse kinematics table, and calculating the control instructions of each joint steering engine of the double mechanical arms by combining the master-slave control principle.
Further, in the step S21, a Mask-RCNN algorithm is specifically adopted to perform example segmentation on the image acquired by the trinocular vision camera:
displaying, in the image processed by the Mask-RCNN algorithm, a confidence frame of the target object and, from the mask branch in the Mask-RCNN, a mask covering the target;
deriving a mask covering the target object in the mask branch, modifying the color of a mask matrix to be white 255, and setting a non-target object to be black 0;
deriving a mask matrix into a jpg format;
carrying out image preprocessing operation on the processed jpg picture by utilizing an Open-cv built-in library;
carrying out gray level processing on the preprocessed picture, then carrying out binarization processing on the gray level image, and setting a threshold value to enable the mask part of the image to be in a complete connected state and enable the target object and the background in the image to have obvious segmentation.
Further, the step S22 specifically includes the following steps:
S221, using image moments, calculating the centroid of the target in the two-dimensional plane from the mask wrapping the target object:

x̄ = (1/n) Σᵢ₌₁ⁿ xᵢ

wherein xᵢ is the i-th point on the mask, and n is the total number of points on the mask;
S222, summarizing the binary image pixels through the zeroth moment m₀₀, and calculating the first moments m₁₀ and m₀₁ of the target mask image to determine the center of the gray-scale image:

m₀₀ = Σ_x Σ_y f(x, y), m₁₀ = Σ_x Σ_y x·f(x, y), m₀₁ = Σ_x Σ_y y·f(x, y)

x̄ = m₁₀/m₀₀, ȳ = m₀₁/m₀₀

S223, when the target object lying down is grabbed, calculating the principal axis of the target object and its direction angle by using the second moments of the image:

m₂₀ = Σ_x Σ_y x²·f(x, y), m₀₂ = Σ_x Σ_y y²·f(x, y), m₁₁ = Σ_x Σ_y x·y·f(x, y)

θ = (1/2)·arctan(2μ₁₁/(μ₂₀ − μ₀₂))

wherein μ₂₀, μ₀₂, μ₁₁ are the corresponding central moments; the rotation angle of the end effectors of the two mechanical arms is determined by the principal axis of the target object and its direction angle.
Further, the step S23 specifically includes the following steps:
S231, obtaining the camera intrinsic parameters:
acquiring a calibration image, enabling the field of view of the camera to cover the whole area to be grabbed by adjusting the angle of the camera, and then adjusting the focal length and the brightness of the camera;
shooting images of a plurality of calibration plates from the cameras of the three visual angles respectively, while ensuring varied positions and postures of the calibration plate across the images;
solving the intrinsic matrix of the camera according to the dimension values of the real calibration plate;
S232, after the camera intrinsic parameters are calibrated, carrying out coordinate conversion by utilizing trinocular stereo vision:
the trinocular stereo vision model consists of three cameras whose optical axes respectively form set angles, with C₁, C₂, C₃ denoting the projections of the three cameras on the imaging plane;
ideally, the measured values of the binocular system formed by C₁ and C₂, of the binocular system formed by C₁ and C₃, and of the binocular system formed by C₂ and C₃ should coincide with the actual coordinates of the measured point P, i.e. O₁P₁, O₂P₂ and O₃P₃ intersect at one point P in space, which is the measured point P; in a real scene, however, each binocular system carries a certain error, and the measured values P₁, P₂, P₃ do not coincide completely with the actual coordinates of the measured point P but intersect at three different points in space, i.e. O₁P₁, O₂P₂ and O₃P₃ intersect pairwise at points obtained through the binocular vision positioning algorithm;
based on the errors of the binocular vision measuring systems, the spatial three-dimensional coordinates of the measured object are solved by utilizing trinocular vision fusion, and the coordinate conversion expression is changed into the homogeneous form:

z_c·[u, v, 1]ᵀ = K·[R | T]·[X_w, Y_w, Z_w, 1]ᵀ

letting the measured coordinates of the point P be P = (X_w, Y_w, Z_w), the optimal estimated value meets the objective function:

F = min(‖P − P₁‖ + ‖P − P₂‖ + ‖P − P₃‖)

the P point coordinates determined by the above formula are the optimal estimate of the real coordinates of the point P.
Further, the inverse kinematics table in the step S24 specifically adopts the Craig expression method combined with the mathematical models of the two mechanical arms to obtain a D-H parameter table, and the D-H parameter table is utilized to calculate the angle each joint must rotate to reach the target point.
Further, the step S24 specifically includes the following steps:
S241, setting one mechanical arm as the master arm and the other mechanical arm as the slave arm, planning the motion track of the master arm in advance according to the control target, enabling the slave arm to move along with the master arm so that the master arm and the slave arm meet a closed-chain constraint relation, and deriving the motion track of the slave arm according to the motion constraint relation;
S242, setting the target object as Master and the two arms as Slave according to the relation of the coordinate system, and moving the two arms along with the target object by adopting a master-slave planning mode according to the constraint relation between the tail ends of the two arms and the target object: firstly, planning the object track, respectively obtaining the tail end tracks of the two mechanical arms according to the constraint equation, smoothing the tail end tracks in joint space, and finally sending the track control instructions to the mechanical arms for execution.
Compared with the prior art, the invention provides a multi-view-vision-based double-mechanical-arm gripping system and a control method thereof. By building the anthropomorphic double mechanical arm and constructing the trinocular vision recognition analysis system, images of the target object are acquired from different viewpoints and analyzed: on the one hand, the target object can be recognized and accurately positioned; on the other hand, the corresponding control instructions for the anthropomorphic double mechanical arm can be calculated and output, so that the double mechanical arms can grip the target object efficiently and accurately, extending the application field of mechanical arms to more complex daily-life service scenes.
For an acquired image, the instance segmentation algorithm Mask-RCNN first identifies the target and covers the target object with a mask; the centroid pixel coordinates of the target object are then calculated through the first moment of the image, and the principal axis and its direction angle through the second moment; the pixel coordinates of the target object are converted into world coordinates in space using the trinocular visual spatial coordinate conversion relation; after the world coordinates of the target object are obtained, the control instructions for single-arm gripping and double-arm cooperative gripping of the target object are designed using inverse kinematics and closed-chain kinematics. The double mechanical arms thus have autonomy and decision-making ability and can act appropriately on the identified target object: if a target object that a single arm can grip directly is identified, a single arm is controlled to grip it; if a target object requiring two-arm cooperative grasping is identified, the two arms are controlled to grip it cooperatively.
The invention performs positioning based on vision, i.e. the only external sensors are three low-cost cameras, so compared with the lidar approach the cost is greatly reduced. Compared with the currently common binocular vision positioning, the invention adopts three cameras and uses the trinocular visual spatial conversion relation to position the target object, giving a wider viewing angle than binocular vision; in addition, if one camera is damaged, the other two cameras can still form a binocular vision system and complete the positioning, which makes the system more reliable.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2a is a schematic view of a joint of a human arm;
FIG. 2b is a schematic diagram of an anthropomorphic robot arm constructed in an embodiment;
FIG. 3 is a schematic diagram of the anthropomorphic double mechanical arm built in the embodiment;
FIG. 4 is a diagram illustrating an exemplary segmentation detection effect according to an embodiment;
FIG. 5 is a schematic diagram of centroid extraction in the example;
FIG. 6 is a schematic view of the principal axis and the direction angle of the principal axis in the embodiment;
FIG. 7a is a schematic diagram of an embodiment of an image of a calibration plate;
FIG. 7b is a diagram illustrating the calibration effect of the camera according to the embodiment;
FIG. 8 is a diagram illustrating a rotation matrix R in the camera internal reference calibration process according to an embodiment;
FIG. 9 is a schematic view of a three-eye stereovision;
FIG. 10 is a schematic diagram of a two-arm cooperative closed-chain kinematic model;
FIG. 11a is a schematic diagram illustrating the effect of a single robot arm gripping a plastic bottle lying down in the embodiment;
FIG. 11b is a schematic diagram illustrating the effect of a single robot arm gripping a vertically placed plastic bottle in the embodiment;
FIG. 12a and FIG. 12b are schematic views illustrating the effect of the double mechanical arms gripping the umbrella in the embodiment;
FIG. 13a and FIG. 13b are schematic diagrams illustrating the effect of a single mechanical arm gripping an egg in the embodiment.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
As shown in fig. 1, a method for controlling a double-robot gripping system based on multi-view vision includes the following steps:
S1, constructing a humanoid double mechanical arm, wherein the humanoid double mechanical arm is provided with a plurality of joint steering engines and a development board for controlling the working states of the joint steering engines;
S2, constructing a trinocular vision recognition analysis system, wherein the trinocular vision recognition analysis system comprises a trinocular vision camera for acquiring images of a target object from different viewpoints, the trinocular vision camera is connected to a processor, and the processor is used for carrying out image recognition analysis and calculating and outputting control instructions for the joint steering engines of the two mechanical arms;
S3, the processor transmits the control instructions to the development board, which then correspondingly controls the two mechanical arms to complete the grabbing operation of the target object.
The embodiment adopts the above technical solution, and mainly includes:
1. An anthropomorphic double mechanical arm is built; a magnetically encoded bus steering engine is adopted as the joint steering engine of the mechanical arm in this embodiment, and the torque value of each joint steering engine is calculated according to the torque calculation formula in order to select the type of joint steering engine; in this embodiment, the URT-1 development board is adopted to control the whole arm of the mechanical arm.
2. A trinocular vision system is built, and the embodiment adopts an example segmentation algorithm (Mask-RCNN) to realize the identification of the target and the covering of the Mask;
aiming at the extracted mask, calculating the centroid pixel coordinate of the target object by using the first moment of the image, and calculating the direction angle between the main shaft and the main shaft of the target object by using the second moment of the image;
and converting the pixel coordinate of the centroid of the target object into a space coordinate by using a space coordinate conversion relation of the trinocular stereo vision, and further calculating a gripping point needing two-arm cooperative gripping according to the calculated direction angle of the main shaft to realize the two-arm cooperative gripping.
3. According to the built mechanical arm, a D-H parameter table of the mechanical arm is built, grabbing of the target object is achieved through reverse kinematics, and meanwhile double-arm cooperative grabbing of the target object is achieved through double-arm closed-chain kinematics.
Specifically, when the anthropomorphic double mechanical arm is built, the present embodiment is designed to closely imitate the human arm (as shown in FIG. 2a). Excluding the palm, each human arm has 6 degrees of freedom. In the technical scheme, the mechanical arm is divided according to the joints of the human arm; as shown in FIG. 2b, the design of the mechanical arm is divided into a shoulder joint, an elbow joint and a wrist joint. In addition, in order to accurately simulate the motion of the human arm, 2 steering engines are arranged on each joint, giving each joint 2 degrees of freedom; the steering engine types of the different joints are selected according to the torque calculation formula, so that the target object can still be grabbed under the limit torque load. The torque calculation formula is as follows:
T = F × D (i.e. torque = force × moment-arm length)
From the measured joint lengths and the torque calculation, the torque ratings and corresponding models of the different joint steering engines are selected. In this embodiment the target object does not exceed 1 kg, and it must be ensured that a 500 g target object can be gripped under the limit load of the mechanical arm; the double mechanical arms built in this embodiment are shown in FIG. 3.
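As a worked illustration of this sizing rule, a minimal sketch follows (the link lengths and payload are hypothetical placeholders, not the measurements of the embodiment):

```python
# Hypothetical torque sizing from T = F * D; lengths and payload are illustrative only.
G = 9.81  # gravitational acceleration, m/s^2

payload_kg = 0.5  # target object mass the arm must hold at its load limit
links_m = {"wrist": 0.10, "elbow": 0.25, "shoulder": 0.40}  # distance from joint to gripper

for joint, arm_length in links_m.items():
    torque_nm = payload_kg * G * arm_length  # T = F * D
    torque_kgcm = payload_kg * arm_length * 100  # servo datasheets often quote kg*cm
    print(f"{joint}: {torque_nm:.2f} N*m ({torque_kgcm:.1f} kg*cm)")
```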
In this embodiment, when the target centroid and the main axis direction angle are obtained, the following contents are mainly included:
and (3) performing example segmentation by using Mask-RCNN aiming at the pictures collected by the trinocular vision camera. And extracting a mask of the target to be grabbed according to the identified category, and performing image preprocessing operations such as binarization graying operation and the like on the extracted mask by utilizing Open-cv. White in the processed target image is a connected part, and the centroid of the target object is extracted (the centroid of the target object is extracted by using the image moment in digital processing), namely the centroid of the connected image part is extracted.
(1) As shown in FIG. 4, the image processed by the Mask-RCNN algorithm shows the confidence box of the target object and, from the mask branch of Mask-RCNN, the mask covering the target. The mask covering the target object is derived from the mask branch, the color of the mask matrix is modified to white 255, and non-target pixels are set to black 0. The mask matrix is exported in jpg format for later processing.
(2) Image preprocessing is then performed on the exported jpg picture using the Open-cv built-in library: the picture is smoothed with the medianBlur() function, converted to gray scale, and binarized. A threshold is set so that the mask part of the image is in a fully connected state while the target object and the background in the image are clearly segmented. (A minimal code sketch of these two steps follows the numbered items below.)
(3) The extraction of the connected part centroid is mainly used for digital image processing and the application of image moments in the computer vision related field. Image moments refer to a weighted average (moment) of the gray levels of certain specific pixels of an image, or an attribute of an image having a similar function or meaning. Image moments are typically used to describe segmented image objects, from which part of the properties of the image, including area (overall brightness), and information about geometric center and orientation, can be obtained.
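A minimal Open-cv sketch of steps (1) and (2) above, assuming the Mask-RCNN mask branch yields a boolean array (the mask contents and file name here are hypothetical):

```python
import cv2
import numpy as np

# Assume `mask` is a boolean HxW array from the Mask-RCNN mask branch (hypothetical input).
mask = np.zeros((480, 640), dtype=bool)
mask[200:300, 250:400] = True  # stand-in for a detected object region

mask_img = np.where(mask, 255, 0).astype(np.uint8)  # target -> white 255, background -> black 0
cv2.imwrite("mask.jpg", mask_img)                   # export the mask matrix as jpg

img = cv2.imread("mask.jpg", cv2.IMREAD_GRAYSCALE)  # reload and gray-process
img = cv2.medianBlur(img, 5)                        # smooth jpg artifacts so the mask stays connected
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)  # binarize: object vs background
```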
According to the technical scheme, the centroid of the target in the two-dimensional plane is obtained from the mask wrapping the target object using image moments. Suppose the processed mask consists of n points xᵢ; then the centroid of the object is given by:

x̄ = (1/n) Σᵢ₌₁ⁿ xᵢ  (1)

The binary image pixels are summarized by the zeroth moment m₀₀, and the first moments m₁₀ and m₀₁ of the target mask image are calculated to determine the center of the gray-scale image:

m₀₀ = Σ_x Σ_y f(x, y), m₁₀ = Σ_x Σ_y x·f(x, y), m₀₁ = Σ_x Σ_y y·f(x, y)  (2)

x̄ = m₁₀/m₀₀, ȳ = m₀₁/m₀₀  (3)

The coordinates of the target in the two-dimensional image are thus found as (x̄, ȳ).
When a target object placed in a lying state is grabbed with the two fingers, the principal axis and the principal-axis direction angle of the target object need to be obtained in order to calculate the rotation angle of the end effector. The technical scheme uses the second moments of the image to calculate the principal axis and its direction angle:

m₂₀ = Σ_x Σ_y x²·f(x, y), m₀₂ = Σ_x Σ_y y²·f(x, y), m₁₁ = Σ_x Σ_y x·y·f(x, y)  (4)

The tilt angle (i.e. the principal-axis direction angle) of the target object is calculated from the corresponding central moments μ₂₀, μ₀₂, μ₁₁ as:

θ = (1/2)·arctan(2μ₁₁/(μ₂₀ − μ₀₂))  (5)
the results of finding the centroid and the direction angle are shown in fig. 5 and 6.
And for a target object needing to be gripped by two arms, calculating the pixel coordinate of a gripping point according to the mass center and the direction angle of the main shaft, and converting the pixel coordinate into a world coordinate for gripping through a space coordinate conversion relation.
In the embodiment, when the space coordinate conversion is performed, the camera is used for recognizing and positioning the target in the mechanical arm grabbing operation process. According to the technical scheme, the world coordinates of the grasping points are calculated by utilizing the trinocular vision, namely the camera calibration and the trinocular stereo vision. The specific process is as follows:
in this embodiment, an MATLAB toolkit is selected to calibrate the camera internal parameters, and the camera calibration adopts a Zhang friend calibration method and starts the camera calibration by using a calibretecarama () function. The first step is to obtain camera intrinsic parameters: the method comprises the steps of collecting a calibration image, firstly manually adjusting the angle of a camera to enable the visual field of the camera to cover the whole desktop to be grabbed, and then adjusting the focal length and the brightness of the camera to enable the image to achieve better quality. As shown in fig. 7a and 7b, 20 pictures of the calibration plates are taken from cameras of three viewing angles, and the calibration plates in each picture should have the poses different as much as possible, and then the internal reference matrix of the camera is obtained according to the size value of the real calibration plate.
The camera intrinsic parameters take the standard form:

K = [f_x 0 c_x; 0 f_y c_y; 0 0 1]

[The numeric intrinsic parameter matrices of the three cameras are given in the figures.]
the external reference matrix is obtained by PNP algorithm, and includes a rotation matrix R (as shown in fig. 8) and a translation matrix T.
The trinocular camera has a larger field of view than a binocular camera, and it ensures that if one camera is damaged, the binocular vision formed by the other two cameras can still be used for spatial positioning.
After the camera intrinsics are calibrated, coordinate conversion is performed using trinocular stereo vision. As shown in fig. 9, the trinocular stereo vision model consists of three cameras whose optical axes form certain angles with one another; the trinocular stereo vision can also be regarded as three binocular vision models. C₁, C₂, C₃ denote the projections of the three cameras on the imaging plane. Ideally, the measured values of the binocular system formed by C₁ and C₂, of the binocular system formed by C₁ and C₃, and of the binocular system formed by C₂ and C₃ should coincide with the actual coordinates of the measured point P, i.e. O₁P₁, O₂P₂ and O₃P₃ intersect at one point P in space, the measured point P. In a real scene, however, each binocular system carries a certain error, and the measured values P₁, P₂, P₃ do not coincide exactly with the actual coordinates of the measured point P but intersect at three different points in space, i.e. O₁P₁, O₂P₂ and O₃P₃ intersect pairwise at points determined by the binocular vision positioning algorithm.
Based on the errors of the binocular vision measuring systems, the technical scheme solves the spatial three-dimensional coordinates of the measured object using trinocular vision fusion. The coordinate conversion expression becomes the homogeneous form:

z_c·[u, v, 1]ᵀ = K·[R | T]·[X_w, Y_w, Z_w, 1]ᵀ  (10)

Let the measured coordinates of point P be P = (X_w, Y_w, Z_w); the optimal estimate satisfies the objective function:

F = min(‖P − P₁‖ + ‖P − P₂‖ + ‖P − P₃‖)  (11)

The coordinates of P determined by the above equation are considered the optimal estimate of the true coordinates of P.
In constructing the inverse kinematics table, this embodiment considers that inverse kinematics is the process of determining the joint parameters that place an articulated object in a desired pose. The mechanical arm consists of rigid segments connected by joints, and changing the joint angles can produce infinitely many configurations. The forward kinematics problem is solved by computing the pose of the object given the joint angles; the inverse problem is comparatively difficult: given the desired pose of the object, i.e. the three-dimensional coordinates of the end effector in space, the angle each joint must move through has to be found.
In the embodiment, the D-H parameters constructed by the Craig expression method according to the abstract mathematical model of the mechanical arm are shown in the following table.
[D-H parameter table: given in the figures]

According to the constructed D-H parameter table, the angle each joint must rotate through to reach a target point can be calculated: (θ₁, θ₂, θ₃, θ₄, θ₅, θ₆).
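A minimal sketch of how such a D-H table drives the kinematics, using Craig's modified D-H convention (the parameter values are hypothetical placeholders, since the embodiment's table is given only as figures); inverse kinematics then searches for the (θ₁, ..., θ₆) that make this pose match the target:

```python
import numpy as np

def dh_transform(alpha, a, d, theta):
    """Link transform for Craig's modified D-H parameters (alpha, a of the previous link)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,      0,   a],
                     [st * ca,  ct * ca, -sa, -d * sa],
                     [st * sa,  ct * sa,  ca,  d * ca],
                     [0,        0,        0,   1]])

def forward_kinematics(dh_table, thetas):
    """Chain the six joint transforms; returns the end-effector pose in the base frame."""
    T = np.eye(4)
    for (alpha, a, d), theta in zip(dh_table, thetas):
        T = T @ dh_transform(alpha, a, d, theta)
    return T

# Hypothetical 6-DOF table (alpha_{i-1}, a_{i-1}, d_i); not the embodiment's values
dh_table = [(0, 0, 0.10), (-np.pi/2, 0, 0), (0, 0.25, 0),
            (-np.pi/2, 0.02, 0.25), (np.pi/2, 0, 0), (-np.pi/2, 0, 0)]
print(forward_kinematics(dh_table, np.zeros(6))[:3, 3])  # end-effector position at zero pose
```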
In this embodiment, dual-arm cooperative control uses the common master-slave control method: one mechanical arm is set as the master arm (Master) and the other as the slave arm (Slave); the motion trajectory of the master arm is planned in advance according to the control target, the slave arm moves along with the master arm, the master and slave arms satisfy a closed-chain constraint relationship, and the motion trajectory of the slave arm is derived from the motion constraint relationship.
According to the coordinate-system relationship, the object is set as Master and the two arms as Slave in this embodiment. In the master-slave planning mode, the two arms move along with the object according to the constraint relation between the ends of the two arms and the object: the object trajectory is planned first, the end trajectories of the left and right arms are obtained respectively from the constraint equation, smoothing is performed in joint space, and the trajectories are finally sent to the mechanical arms for execution.
As shown in fig. 10, the dual-arm cooperation operation is mainly based on the type of tight-fit operation, i.e. when the dual arms are gripping a rigid body, the two end grippers do not move relative to the gripped object. A closed kinematic chain is formed between the double arms and the grasped object, and the movement of the double arms is correspondingly restrained and keeps a certain kinematic relationship.
According to the coordinate conversion relation, the transformation matrices of the two-arm end effectors and of the operation target's mass center relative to the coordinate origin satisfy:

T_0L = T_0O · T_OL,  T_0R = T_0O · T_OR

wherein T_0O is the pose of the object's mass center relative to the coordinate origin, and T_OL and T_OR, the poses of the left and right end effectors relative to the object, are constant matrices, since the grippers do not move relative to the grasped rigid body. From these, the pose constraint relation between the two end effectors can be obtained:

P_R = P_L + R_L · P_LR,  R_R = R_L · R_LR

wherein P_L and P_R are the position matrices of the tail ends of the two arms relative to the origin of coordinates, R_L and R_R are the rotation matrices of the two arm ends with respect to the origin of coordinates, and P_LR and R_LR are the constant relative position and rotation of the right arm end with respect to the left arm end.
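A minimal numpy sketch of this closed-chain constraint: both end-effector poses follow from the object pose through constant grasp offsets (all poses below are hypothetical examples):

```python
import numpy as np

def pose(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and position p."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

# Hypothetical object pose in the base frame and constant grasp offsets T_OL, T_OR
T_0O = pose(np.eye(3), [0.30, 0.00, 0.20])   # object mass center w.r.t. origin
T_OL = pose(np.eye(3), [0.00, +0.12, 0.00])  # left gripper w.r.t. object (constant)
T_OR = pose(np.eye(3), [0.00, -0.12, 0.00])  # right gripper w.r.t. object (constant)

T_0L = T_0O @ T_OL  # left end effector w.r.t. origin
T_0R = T_0O @ T_OR  # right end effector w.r.t. origin

# Pose constraint between the two end effectors: T_0R = T_0L @ inv(T_OL) @ T_OR
T_LR = np.linalg.inv(T_OL) @ T_OR
assert np.allclose(T_0R, T_0L @ T_LR)
```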
Since the two arm ends have no relative motion, their end velocities are equal, which gives the velocity constraint relation of the two arm ends:

J(q_L) · q̇_L = J(q_R) · q̇_R

which separates into position and attitude components:

J_v(q_L) · q̇_L = J_v(q_R) · q̇_R,  J_w(q_L) · q̇_L = J_w(q_R) · q̇_R

wherein J(q_L), J(q_R) are the Jacobian matrices of the left and right arms respectively, and J_v(q), J_w(q) are the position Jacobian matrix and the attitude Jacobian matrix of a robot arm respectively. A common method for maintaining coordinated operation of a dual-arm system is the relative Jacobian matrix, which allows the dual-arm system to be treated as a single redundant manipulator whose number of joints equals the sum of the joints of the two manipulators; furthermore, a dual-arm system modeled with a relative Jacobian matrix can be controlled with the same algorithms as a single-arm system. The relative Jacobian matrix is then combined with Jacobian null-space projection to achieve the cooperative tasks.
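A minimal sketch of the velocity constraint in use: the slave arm's joint velocities follow from the master's through the two Jacobians (the Jacobians below are hypothetical stand-ins for those derived from the arms' D-H models):

```python
import numpy as np

# Equal end velocities: J(q_L) qdot_L = J(q_R) qdot_R  ->  qdot_R = pinv(J_R) J_L qdot_L
def slave_joint_velocities(J_L, J_R, qdot_L):
    """Map the master arm's joint velocities to the slave arm through the closed chain."""
    return np.linalg.pinv(J_R) @ (J_L @ qdot_L)

# Hypothetical 6x6 Jacobians and a master joint-velocity command
J_L = np.eye(6)
J_R = np.eye(6) * 1.1
qdot_L = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])  # rad/s
print(slave_joint_velocities(J_L, J_R, qdot_L))
```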
From the solved Jacobian matrices, the control commands for the rotation angle, speed and acceleration of each joint steering engine can be obtained. The processor transmits these control instructions to each joint steering engine through the development board, and the double mechanical arms can then be controlled to complete the grabbing operation of the target object.
In this embodiment, gripping tests were performed on a plastic bottle placed lying down, a plastic bottle placed upright, an umbrella, and an egg. As shown in fig. 11a and 11b, the calibrated trinocular camera performs instance segmentation with Mask-RCNN; the grasping target, a plastic bottle, is placed at an arbitrary position within reach of the two arms; after the centroid is calculated and converted into spatial coordinates, a single mechanical arm moves to the position of the target object according to inverse kinematics and performs the grasping action.
In daily life there are objects that are hard or impossible to grip with one hand, such as umbrellas or large bags of paper towels. A single arm cannot complete the grip, and both arms are required to grip the target object. Instance segmentation with Mask-RCNN first detects the umbrella or the large bag of paper towels; the three-dimensional coordinates of the centroid are then acquired, the two-arm gripping points are selected, the two arms move along with the target object, and the umbrella is gripped by the two arms cooperatively; the effect is shown in fig. 12a and 12b.
Fragile articles such as eggs require torque control of the clamping jaws. In this embodiment, using torque feedback from the clamping jaws, a single-arm grasp of a raw egg was attempted: the spatial pose of the egg is identified, the jaw torque is controlled, and the appropriate gripping torque was determined through experimental trials. The egg was finally grasped successfully (as shown in fig. 13a and 13b) without being broken by excessive torque.
In conclusion, the technical scheme builds a multi-view-vision double-mechanical-arm grasping system: a 6-degree-of-freedom anthropomorphic mechanical arm is constructed and combined with computer vision algorithms; the target object is identified by an instance segmentation algorithm and positioned by trinocular stereo vision, and different grasping modes, single-arm grasping and double-arm cooperative grasping, are adopted for different target objects. The technical scheme overcomes the limitation that the traditional mechanical arm can only complete a single repeated operation; meanwhile, the anthropomorphic design of the mechanical arm can help users complete daily operations normally performed by human arms. Through the combination of vision algorithms and mechanical-arm kinematics, the mechanical arm adopts different grasping modes for different targets. The design of the mechanical arm in the technical scheme is close to the real human arm and is a preliminary prototype of the future anthropomorphic service robot.
The technical scheme can be applied to service robots, in scenes such as helping humans grab and carry simple target objects. Given a gripping instruction, the robot grips a bottle of beverage or an umbrella to a specified position. The mechanical arm identifies the type and position of the target object through the recognition algorithm and autonomously decides between single-arm and double-arm grasping.

Claims (10)

1. A control method of a double-mechanical-arm grasping system based on multi-view vision is characterized by comprising the following steps:
S1, building an anthropomorphic double mechanical arm, wherein the anthropomorphic double mechanical arm is provided with a plurality of joint steering engines and a development board for controlling the working state of each joint steering engine;
S2, constructing a trinocular vision recognition analysis system, wherein the trinocular vision recognition analysis system comprises a trinocular vision camera for acquiring images of a target object from different viewpoints, the trinocular vision camera is connected to a processor, and the processor is used for carrying out image recognition analysis and calculating and outputting control instructions for the joint steering engines of the double mechanical arms;
S3, the processor transmits the control instructions to the development board, and the two mechanical arms are then correspondingly controlled to complete the grabbing operation of the target object.
2. The binocular vision based double-robot arm gripping system control method according to claim 1, wherein the step S1 specifically comprises the following steps:
S11, dividing the design of the mechanical arm into a shoulder joint, an elbow joint and a wrist joint according to the division of the joints of the human arm;
in order to simulate the motion of the human arm, 2 joint steering engines are arranged on each joint, giving each joint 2 degrees of freedom;
S12, calculating the torque value corresponding to each joint steering engine according to the torque calculation formula and the measured joint length, so as to complete model selection of the joint steering engines;
S13, correspondingly connecting each joint steering engine to the development board so as to receive the corresponding control instructions from the development board.
3. The binocular vision based double-manipulator gripping system control method according to any one of claims 1 to 2, wherein the joint steering engine specifically adopts a magnetically encoded bus steering engine.
4. The binocular vision based double-robot arm gripping system control method according to any one of claims 1 to 2, wherein the development board is specifically a URT-1 development board.
5. The binocular vision based double-robot gripping system control method according to claim 1, wherein the specific working process of the processor in the step S2 includes:
S21, carrying out example segmentation on the image acquired by the trinocular vision camera to realize the identification of the target object in the image and extract a mask of the target to be captured;
S22, calculating the centroid pixel coordinates of the target object by using the first moment of the image according to the extracted mask;
calculating the principal axis of the target object and its direction angle by using the second moment of the image;
S23, converting the pixel coordinates of the mass center of the target object into space coordinates by using the space coordinate conversion relation of the trinocular stereo vision, and calculating the positions of the gripping points that need to be cooperatively gripped by the two mechanical arms according to the direction angle of the principal axis;
S24, constructing an inverse kinematics table, and calculating the control instructions of the steering engine of each joint of the double mechanical arms by combining the master-slave control principle.
6. The binocular vision based double-manipulator gripping system control method according to claim 5, wherein the step S21 specifically adopts a Mask-RCNN algorithm to perform example segmentation on the image acquired by the trinocular vision camera:
displaying, in the image processed by the Mask-RCNN algorithm, a confidence frame of the target object and, from the mask branch in the Mask-RCNN, a mask covering the target;
deriving a mask covering the target object in the mask branch, modifying the color of a mask matrix to be white 255, and setting a non-target object to be black 0;
deriving a mask matrix into a jpg format;
carrying out image preprocessing operation on the processed jpg picture by utilizing an Open-cv built-in library;
carrying out gray level processing on the preprocessed picture, then carrying out binarization processing on the gray level image, and setting a threshold value to enable the mask part of the image to be in a complete connected state and enable the target object and the background in the image to have obvious segmentation.
7. The method for controlling a two-robot gripping system based on multi-view vision according to claim 6, wherein the step S22 specifically comprises the following steps:
S221, calculating the centroid of the target in the two-dimensional plane from the mask wrapping the target object by using image moments:

x̄ = (1/n) Σᵢ₌₁ⁿ xᵢ

wherein xᵢ is the i-th point on the mask, and n is the total number of points on the mask;
S222, summarizing the binary image pixels through the zeroth moment m₀₀, and calculating the first moments m₁₀ and m₀₁ of the target mask image to determine the center of the gray-scale image:

m₀₀ = Σ_x Σ_y f(x, y), m₁₀ = Σ_x Σ_y x·f(x, y), m₀₁ = Σ_x Σ_y y·f(x, y)

x̄ = m₁₀/m₀₀, ȳ = m₀₁/m₀₀

S223, when the target object lying down is grabbed, calculating the principal axis of the target object and its direction angle by using the second moments of the image:

m₂₀ = Σ_x Σ_y x²·f(x, y), m₀₂ = Σ_x Σ_y y²·f(x, y), m₁₁ = Σ_x Σ_y x·y·f(x, y)

θ = (1/2)·arctan(2μ₁₁/(μ₂₀ − μ₀₂))

wherein μ₂₀, μ₀₂, μ₁₁ are the corresponding central moments; the rotation angle of the end effectors of the two mechanical arms is determined by the principal axis of the target object and its direction angle.
8. The method for controlling a two-robot gripping system based on multi-view vision according to claim 7, wherein the step S23 specifically includes the following steps:
S231, obtaining the camera intrinsic parameters:
acquiring a calibration image, enabling the field of view of the camera to cover the whole area to be grabbed by adjusting the angle of the camera, and then adjusting the focal length and the brightness of the camera;
shooting images of a plurality of calibration plates from the cameras of the three visual angles respectively, while ensuring varied positions and postures of the calibration plate across the images;
solving the intrinsic matrix of the camera according to the dimension values of the real calibration plate;
S232, after the camera intrinsic parameters are calibrated, carrying out coordinate conversion by utilizing trinocular stereo vision:
the trinocular stereo vision model consists of three cameras whose optical axes respectively form set angles, with C₁, C₂, C₃ denoting the projections of the three cameras on the imaging plane;
ideally, the measured values of the binocular system formed by C₁ and C₂, of the binocular system formed by C₁ and C₃, and of the binocular system formed by C₂ and C₃ should coincide with the actual coordinates of the measured point P, i.e. O₁P₁, O₂P₂ and O₃P₃ intersect at one point P in space, which is the measured point P; in a real scene, however, a certain error exists in each binocular system, and the measured values P₁, P₂, P₃ do not coincide completely with the actual coordinates of the measured point P but intersect at three different points in space, i.e. O₁P₁, O₂P₂ and O₃P₃ intersect pairwise at points obtained through the binocular vision positioning algorithm;
based on the error of the binocular vision measurement system, the three-dimensional space coordinates of the measured object are solved by utilizing the trinocular vision fusion, and the coordinate conversion expression is changed into a homogeneous conversion form:
z_c·[u, v, 1]ᵀ = K·[R | T]·[X_w, Y_w, Z_w, 1]ᵀ

letting the measured coordinates of the point P be P = (X_w, Y_w, Z_w), the optimal estimated value meets the objective function:

F = min(‖P − P₁‖ + ‖P − P₂‖ + ‖P − P₃‖)

the P point coordinates determined by the above formula are the optimal estimate of the real coordinates of the point P.
9. The binocular vision based control method of the two-robot gripping system of claim 5, wherein the inverse kinematics table in the step S24 specifically adopts the Craig expression method, combined with the mathematical models of the two mechanical arms, to obtain a D-H parameter table, and the D-H parameter table is utilized to calculate the angle each joint must rotate to reach the target point.
10. The binocular vision based double-robot gripping system control method according to claim 5, wherein the step S24 specifically comprises the following steps:
S241, setting one mechanical arm as a master arm and the other mechanical arm as a slave arm, planning the motion track of the master arm in advance according to a control target, enabling the slave arm to move along with the master arm so that the master arm and the slave arm meet a closed-chain constraint relation, and deducing the motion track of the slave arm according to the motion constraint relation;
S242, setting the target object as Master and the two arms as Slave according to the relation of the coordinate system, and moving the two arms along with the target object by adopting a master-slave planning mode according to the constraint relation between the tail ends of the two arms and the target object: firstly, planning the object track, respectively obtaining the tail end tracks of the two mechanical arms according to the constraint equation, smoothing the tail end tracks in joint space, and finally sending the track control instructions to the mechanical arms for execution.
CN202211032772.4A 2022-08-26 2022-08-26 Binocular vision-based control method for double-mechanical-arm gripping system Pending CN115194774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211032772.4A CN115194774A (en) 2022-08-26 2022-08-26 Binocular vision-based control method for double-mechanical-arm gripping system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211032772.4A CN115194774A (en) 2022-08-26 2022-08-26 Binocular vision-based control method for double-mechanical-arm gripping system

Publications (1)

Publication Number Publication Date
CN115194774A 2022-10-18

Family

ID=83572193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211032772.4A Pending CN115194774A (en) 2022-08-26 2022-08-26 Binocular vision-based control method for double-mechanical-arm gripping system

Country Status (1)

Country Link
CN (1) CN115194774A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116330285A (en) * 2023-03-20 2023-06-27 深圳市功夫机器人有限公司 Mechanical arm control method and device, mechanical arm and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination