CN105137973A - Method for robot to intelligently avoid human under man-machine cooperation scene - Google Patents

Method for robot to intelligently avoid human under man-machine cooperation scene

Info

Publication number
CN105137973A
CN105137973A (application CN201510518563.4A)
Authority
CN
China
Prior art keywords
robot
operator
coordinate system
joint
kinect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510518563.4A
Other languages
Chinese (zh)
Other versions
CN105137973B (en)
Inventor
张平
杜广龙
金培根
高鹏
刘欣
李备
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201510518563.4A priority Critical patent/CN105137973B/en
Publication of CN105137973A publication Critical patent/CN105137973A/en
Application granted granted Critical
Publication of CN105137973B publication Critical patent/CN105137973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Manipulator (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for a robot to intelligently avoid humans in a human-robot collaboration scene, comprising the steps of: (1) building the cylinder model of the robot in the virtual scene from the robot's D-H parameters; (2) acquiring RGB images of the real scene and skeleton data of the operator with a Kinect, and building the cylinder model of the operator in the virtual scene; (3) calibrating the correspondence of the robot and the operator between the real scene and the virtual scene; and (4) performing real-time collision detection between the robot model and the operator model. With the disclosed method, the operator can move freely in the robot's workspace, and the robot avoids the operator according to the operator information obtained by the Kinect.

Description

Method for a robot to intelligently avoid humans in a human-robot collaboration scene
Technical field
The present invention relates to the technical field of robot motion, and in particular to a method for a robot to intelligently avoid humans in a human-robot collaboration scene.
Background art
Intelligent robotics is intersecting and fusing with other fields, creating many scenes in which humans and robots need to cooperate. An intelligent robot that shares a person's working environment can, while guaranteeing working efficiency, effectively relieve human labor, and in special cases can even replace humans in high-risk tasks. Against the background of close human-robot cooperation, how to guarantee human safety is a vital problem. The invention proposes a modeling and obstacle-avoidance method for a robot in a human-robot collaboration scene: the method builds a cylinder model of the robot from the robot's D-H parameters, uses a Kinect sensor to acquire the positions of the operator's skeleton joints and builds a cylinder model of the operator from them, and makes the robot recognize and avoid the human body according to the result of collision detection between the robot model and the operator model.
When building a virtual robot scene, the common practice is to use modeling tools such as 3dsMax to build a virtual robot model that corresponds exactly to the robot in the real scene. However, this approach lacks generality and its modeling cost is relatively high, so it is worthwhile to build the robot model from the robot's intrinsic D-H parameters.
In a human-robot collaboration scene, the robot must dodge according to the position and motion information of the human body in order to avoid collision. Traditional robot collision-avoidance research, however, mainly addresses collision avoidance with objects. Unlike objects in the environment, a person has his own structural features, and human posture and motion are unpredictable, so the robot needs to recognize the human body in real time.
Before the Kinect appeared, human-information acquisition techniques based on multiple vision cameras were relatively popular, but such methods need images from multiple viewpoints and their cost is relatively high. In marker-based multi-camera human recognition, the person must wear specific clothing or equipment and is constrained while moving, which limits the application of this technique. Markerless human-information extraction techniques need prior knowledge such as human images, three-dimensional information and objective functions for matching; the matching process needs a sufficient number of samples, otherwise recognition accuracy is low, and real-time performance is also hard to guarantee.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and to provide a method for a robot to intelligently avoid humans in a human-robot collaboration scene, in which the operator can move freely in the robot's workspace and the robot actively avoids obstacles according to the operator information acquired by the Kinect.
To achieve the above object, the technical solution provided by the present invention is: a method for a robot to intelligently avoid humans in a human-robot collaboration scene, comprising the following steps:
1) Build the cylinder model of the robot in the virtual scene from the robot's D-H parameters, as follows:
Assume that the robot is composed of a series of links and joints in an arbitrary form; the D-H parameters represent the relation between the robot's joints and links. To model the robot, each joint is modeled as a point and each link as a right circular cylinder. Using the forward kinematics of the robot, the transformation of each joint's coordinate system relative to the base coordinate system is solved from the initial angle of each joint. A right-handed coordinate system is defined for each joint, and the homogeneous transformation between the coordinate system of one joint and that of the next joint is called an A matrix. A_1 represents the position and attitude of the first joint relative to the base coordinate system, A_2 represents the position and attitude of the second joint relative to the first joint, and the position and attitude of the second joint relative to the base coordinate system is the following matrix product:
T_2 = A_1 A_2    (1)
By analogy, the position and attitude of the n-th joint relative to the base coordinate system is:
T_n = A_1 A_2 \cdots A_n = \begin{bmatrix} q_n^{3\times 3} & p_n^{3\times 1} \\ 0 & 1 \end{bmatrix}    (2)
In the formula, q_n^{3×3} represents the attitude of the n-th joint and p_n^{3×1} represents the position of the n-th joint relative to the base coordinate system; each A_i can be expressed from the D-H parameters as:
A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}    (3)
where θ_i, d_i, a_i and α_i are the D-H parameters of robot joint i;
The cylinder model of the robot in the virtual scene is built in the robot base coordinate system: the position of each joint relative to the base coordinate system is solved, each link between adjacent joints is modeled as a right circular cylinder whose top and bottom face centers lie at the positions of the two joint points, the cylinder radius is adjusted according to the actual robot, and the cylinder model of the 6-DOF robot is thus built (a code sketch illustrating this step is given below);
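The kinematic chain of formulas (1)-(3) can be illustrated with a short sketch. This is a minimal example in Python/NumPy under assumed, illustrative D-H values (not the parameters of any particular robot); it only shows how the joint positions p_n that serve as cylinder end-face centers are accumulated.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """A_i of formula (3): homogeneous transform from joint i-1 to joint i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.,       sa,       ca,      d],
        [0.,       0.,       0.,     1.],
    ])

def joint_positions(dh_params):
    """Accumulate T_n = A_1 A_2 ... A_n (formula (2)) and return the position
    of each joint relative to the robot base coordinate system."""
    T = np.eye(4)
    positions = [T[:3, 3].copy()]              # base origin
    for theta, d, a, alpha in dh_params:
        T = T @ dh_matrix(theta, d, a, alpha)
        positions.append(T[:3, 3].copy())      # p_n: end-face center of a link cylinder
    return positions

# Illustrative D-H table (theta, d, a, alpha) for a 6-DOF arm; values are assumed.
example_dh = [
    (0.0, 0.40, 0.05,  np.pi / 2),
    (0.0, 0.00, 0.30,  0.0),
    (0.0, 0.00, 0.05,  np.pi / 2),
    (0.0, 0.35, 0.00, -np.pi / 2),
    (0.0, 0.00, 0.00,  np.pi / 2),
    (0.0, 0.10, 0.00,  0.0),
]
print(joint_positions(example_dh))
```

Consecutive joint positions then give the two end-face centers of each link cylinder in the virtual scene.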
2) Acquire the RGB image of the real scene and the skeleton data of the operator in real time with the Kinect, and build the cylinder model of the operator in the virtual scene, as follows:
When the operator enters the robot's working area, the Kinect fixed in front of the operator acquires the RGB image of the real scene and the skeleton data of the operator in real time, so that the operator is tracked and located. The Kinect has three cameras: one RGB color camera for acquiring color images and two infrared cameras for acquiring depth images;
For acquisition of the real-scene RGB image, the Kinect is placed at a suitable position in the environment and initialized with the color-stream flag NUI_INITIALIZE_FLAG_USES_COLOR, and the acquired color image frames are drawn with OpenCV;
For acquisition of the skeleton data, the Kinect is initialized with the skeleton flag NUI_INITIALIZE_FLAG_USES_SKELETON; when a person is standing, the Kinect obtains the positions of 20 joint points representing the person's skeleton. Fifteen of these joint points are extracted to build the cylinder model of the operator in the virtual scene; sorted from top to bottom and from left to right they are: ① head; ② shoulder center; ③ right shoulder; ④ right elbow; ⑤ right hand; ⑥ left shoulder; ⑦ left elbow; ⑧ left hand; ⑨ hip center; ⑩ right hip; ⑪ right knee; ⑫ right foot; ⑬ left hip; ⑭ left knee; ⑮ left foot. The positions of these joint points are all given relative to the Kinect coordinate system;
The cylinder model of the operator is built in the Kinect coordinate system: from the Kinect's processing of the depth image, the positions of the operator's 15 joint points are obtained, each pair of adjacent joint points in the human skeleton is modeled as a right circular cylinder whose top and bottom face centers lie at the positions of the two joint points, and the cylinder radius is adjusted according to the actual person (a code sketch illustrating this step is given below);
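A minimal sketch of how the operator's cylinder model can be assembled from tracked joint positions is given below. The adjacency pairing of the 15 joints and the default radius are assumptions made for illustration; the joint names simply follow the list above, and the joint coordinates are expected in the Kinect coordinate system.

```python
import numpy as np

# Assumed adjacency between the 15 extracted joints; each pair becomes one cylinder.
SKELETON_EDGES = [
    ("head", "shoulder_center"),
    ("shoulder_center", "right_shoulder"), ("right_shoulder", "right_elbow"),
    ("right_elbow", "right_hand"),
    ("shoulder_center", "left_shoulder"), ("left_shoulder", "left_elbow"),
    ("left_elbow", "left_hand"),
    ("shoulder_center", "hip_center"),
    ("hip_center", "right_hip"), ("right_hip", "right_knee"), ("right_knee", "right_foot"),
    ("hip_center", "left_hip"), ("left_hip", "left_knee"), ("left_knee", "left_foot"),
]

def operator_cylinders(joints, radius=0.08):
    """Turn tracked joint positions (meters, Kinect frame) into cylinders,
    each described by its two end-face centers and a radius."""
    cylinders = []
    for a, b in SKELETON_EDGES:
        if a in joints and b in joints:
            cylinders.append((np.asarray(joints[a]), np.asarray(joints[b]), radius))
    return cylinders

# Example with two joints only (positions are made up for illustration).
print(operator_cylinders({"head": [0.0, 0.7, 2.0], "shoulder_center": [0.0, 0.5, 2.0]}))
```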
3) Calibrate the correspondence of the robot and the operator between the real scene and the virtual scene, as follows:
In the above two steps, the robot in the virtual scene is modeled in the robot base coordinate system and the operator in the virtual scene is modeled in the Kinect coordinate system. To put the virtual scene and the real scene into correspondence, a coordinate system is chosen in the real scene and called the world coordinate system E1; the robot base coordinate system is called coordinate system E2, and the Kinect coordinate system is called coordinate system E3;
The relation between the robot base coordinate system E2 and the world coordinate system E1 is represented by a rotation matrix R and a translation matrix T. If the coordinates of a joint point P of the robot are (X_P^{E2}, Y_P^{E2}, Z_P^{E2}) in the robot base coordinate system and (X_P^{E1}, Y_P^{E1}, Z_P^{E1}) in the world coordinate system, the relation between them is:
\begin{bmatrix} X_P^{E1} \\ Y_P^{E1} \\ Z_P^{E1} \\ 1 \end{bmatrix} = \begin{bmatrix} R_{E2}^{E1} & T_{E2}^{E1} \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} X_P^{E2} \\ Y_P^{E2} \\ Z_P^{E2} \\ 1 \end{bmatrix} = M_2 \cdot \begin{bmatrix} X_P^{E2} \\ Y_P^{E2} \\ Z_P^{E2} \\ 1 \end{bmatrix}    (4)
In the formula, R_{E2}^{E1} is a 3×3 matrix representing the attitude change of the robot base coordinate system relative to the world coordinate system; T_{E2}^{E1} is a 3×1 matrix representing the position change of the robot base coordinate system relative to the world coordinate system; M_2 is the 4×4 matrix representing the pose transformation of the robot base coordinate system relative to the world coordinate system;
Likewise, the relation between the Kinect coordinate system E3 and the world coordinate system E1 is represented by a rotation matrix R and a translation matrix T. If the coordinates of a joint point P' of the operator are (X_{P'}^{E3}, Y_{P'}^{E3}, Z_{P'}^{E3}) in the Kinect coordinate system and (X_{P'}^{E1}, Y_{P'}^{E1}, Z_{P'}^{E1}) in the world coordinate system, the relation between them is:
\begin{bmatrix} X_{P'}^{E1} \\ Y_{P'}^{E1} \\ Z_{P'}^{E1} \\ 1 \end{bmatrix} = \begin{bmatrix} R_{E3}^{E1} & T_{E3}^{E1} \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} X_{P'}^{E3} \\ Y_{P'}^{E3} \\ Z_{P'}^{E3} \\ 1 \end{bmatrix} = M_3 \cdot \begin{bmatrix} X_{P'}^{E3} \\ Y_{P'}^{E3} \\ Z_{P'}^{E3} \\ 1 \end{bmatrix}    (5)
After the robot base coordinate system and the Kinect coordinate system have both been transformed into the world coordinate system, the next task is to determine the mapping between the real scene and the virtual scene and to measure the error between them. If P_v is an arbitrary point in the virtual scene and P_r is its corresponding point in the RGB image of the real scene, then there exists a mapping f such that:
P_r = f(P_v) + ε    (6)
In the formula, ε is the correspondence error between the virtual scene and the real scene;
For the three-dimensional skeleton data and the two-dimensional color-image data of the operator obtained by the Kinect, the Kinect SDK provides the mutual conversion between the two, i.e. it provides the mapping f_person between the operator in the virtual scene and the operator in the real scene: NuiTransformSkeletonToDepthImage provides the mapping f_1 from the three-dimensional skeleton data to the two-dimensional depth image, and NuiImageGetColorPixelCoordinatesFromDepthPixel provides the mapping f_2 from the two-dimensional depth image to the two-dimensional color image, so the mapping from the three-dimensional skeleton data to the two-dimensional color image is:
f_person = f_2 · f_1    (7)
At this point the operator in the virtual scene corresponds to the operator in the real scene. For the robot, since the robot base coordinate system and the Kinect coordinate system have both been transformed into the world coordinate system, the virtual scene containing the robot and the operator shares the same coordinate system as the real scene, so the mapping f_robot between the virtual-scene robot and the real-scene robot is the same as the mapping f_person between the virtual-scene operator and the real-scene operator:
f_robot = f_person    (8)
Then for the whole virtual scene and real scene, the mapping f between them is:
f = f_robot = f_person = f_2 · f_1    (9)
Combining formulas (6) and (9) above, the error ε between the virtual scene and the real scene can be measured as:
ε = P_r − f_2 · f_1(P_v)    (10)
A code sketch illustrating the coordinate transformations of formulas (4) and (5) is given below.
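As a concrete illustration of formulas (4) and (5), the following sketch assembles the pose matrices M_2 and M_3 and maps a robot joint and an operator joint into the world coordinate system E1. The rotation and translation values are assumed, purely illustrative extrinsics, not calibration results from the patent.

```python
import numpy as np

def pose_matrix(R, t):
    """Assemble the 4x4 pose matrix of formulas (4)/(5) from a rotation R (3x3)
    and a translation t (3x1)."""
    M = np.eye(4)
    M[:3, :3] = np.asarray(R, dtype=float)
    M[:3, 3] = np.asarray(t, dtype=float).ravel()
    return M

def to_world(M, p):
    """Map a point expressed in the robot base frame (M = M_2) or in the Kinect
    frame (M = M_3) into the world coordinate system E1."""
    ph = np.append(np.asarray(p, dtype=float), 1.0)   # homogeneous coordinates
    return (M @ ph)[:3]

# Assumed extrinsics: robot base coincident with the world frame,
# Kinect placed 2 m in front of the world origin.
M2 = pose_matrix(np.eye(3), [0.0, 0.0, 0.0])
M3 = pose_matrix(np.eye(3), [0.0, 0.0, 2.0])

robot_joint_world = to_world(M2, [0.3, 0.1, 0.5])       # joint P, E2 -> E1
operator_joint_world = to_world(M3, [0.1, -0.2, 1.5])   # joint P', E3 -> E1
print(robot_joint_world, operator_joint_world)
```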
4) Perform real-time collision detection between the robot model and the operator model, as follows:
When the operator enters the robot's working environment, the operator's motion is unpredictable for the robot, and the modeling strategy for the operator and the robot directly affects the efficiency and accuracy of collision detection. Both the robot and the operator are modeled with the relatively regular geometric shape of the right circular cylinder, and the collision-detection scheme based on cylinder bounding volumes is as follows (a code sketch is given after these cases):
Let A and B be the centers of the top and bottom faces of one cylinder and C and D the centers of the top and bottom faces of the other cylinder, so that the space lines l_AB and l_CD are the axes of the two cylinders. The main case considered is that the two lines are skew: the common perpendicular of the skew lines l_AB and l_CD and its intersection points P and Q with the two lines are found; if P or Q does not lie within segment AB or segment CD, boundary detection is performed and the endpoint nearest the other line is substituted for P or Q;
If P and Q lie within segments AB and CD respectively, the two cylinders can only intersect at their lateral surfaces, and it is only necessary to compare the length of segment PQ with the sum of the radii of the two cylinders;
If one of P, Q falls inside its segment and the other falls on an endpoint, the two cylinders can only intersect between a lateral surface and an end face; in this case the generatrix of the lateral surface nearest the end face is found and it is judged whether this generatrix intersects the end face;
If P and Q both fall on segment endpoints, the two cylinders can only intersect at their end faces, and it is only necessary to judge whether the two end discs intersect in space.
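The core of the scheme, the common-perpendicular points P and Q clamped onto segments AB and CD, can be sketched as follows. The sketch only implements the lateral-surface case: it compares the clamped axis-to-axis distance with the sum of the radii, which is a conservative approximation of the end-face cases rather than the exact boundary procedure described above; variable names, tolerances and example values are illustrative assumptions.

```python
import numpy as np

def closest_axis_distance(A, B, C, D):
    """Distance |PQ| between the closest points P on segment AB and Q on segment CD
    (the clamped common perpendicular of the two cylinder axes)."""
    A, B, C, D = (np.asarray(x, dtype=float) for x in (A, B, C, D))
    u, v, w = B - A, D - C, A - C
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w, v @ w
    denom = a * c - b * b
    # unclamped solution for skew lines, then clamp to the segments (boundary detection)
    s = np.clip((b * e - c * d) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = np.clip((b * s + e) / c, 0.0, 1.0) if c > 1e-12 else 0.0
    s = np.clip((b * t - d) / a, 0.0, 1.0) if a > 1e-12 else 0.0
    P, Q = A + s * u, C + t * v
    return np.linalg.norm(P - Q)

def cylinders_collide(cyl1, cyl2):
    """cyl = (top_center, bottom_center, radius); report a collision when the
    axis-to-axis distance does not exceed the sum of the radii."""
    (A, B, r1), (C, D, r2) = cyl1, cyl2
    return closest_axis_distance(A, B, C, D) <= r1 + r2

# Example: a robot link cylinder against an operator forearm cylinder (made-up values).
link = ([0.0, 0.0, 0.5], [0.0, 0.0, 1.0], 0.06)
forearm = ([0.05, 0.05, 0.8], [0.3, 0.05, 0.8], 0.05)
print(cylinders_collide(link, forearm))
```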
Compared with the prior art, the present invention has the following advantages and beneficial effects:
The method models both the robot and the operator with the relatively regular geometric shape of the right circular cylinder, which gives good generality and applicability, and cylinder modeling also improves the efficiency and accuracy of collision detection to a certain extent. The operator can therefore move freely in the robot's workspace, and the robot actively avoids obstacles according to the operator information obtained by the Kinect, meeting the operation requirements.
Brief description of the drawings
Fig. 1 is the system structure diagram of the method of the invention.
Fig. 2 is the cylinder model of the 6-DOF robot.
Fig. 3 is the cylinder model of the operator.
Detailed description of the embodiments
The invention is further described below in conjunction with a specific embodiment.
In the method of this embodiment for a robot to intelligently avoid humans in a human-robot collaboration scene, the operator can move freely in the robot's workspace and the robot actively avoids obstacles according to the operator information acquired by the Kinect; the system structure is shown in Fig. 1. The key of the method is to build a virtual robot model and an operator model corresponding to the real human-robot collaboration scene and to detect collisions between the robot model and the operator model; it comprises the following steps:
1) Build the cylinder model of the robot in the virtual scene from the robot's D-H parameters, as follows:
Assume that the robot is composed of a series of links and joints in an arbitrary form; the D-H parameters represent the relation between the robot's joints and links. To model the robot, each joint is modeled as a point and each link as a right circular cylinder. Using the forward kinematics of the robot, the transformation of each joint's coordinate system relative to the base coordinate system is solved from the initial angle of each joint. A right-handed coordinate system is defined for each joint, and the homogeneous transformation between the coordinate system of one joint and that of the next joint is called an A matrix. A_1 represents the position and attitude of the first joint relative to the base coordinate system, A_2 represents the position and attitude of the second joint relative to the first joint, and the position and attitude of the second joint relative to the base coordinate system is the following matrix product:
T_2 = A_1 A_2    (1)
By analogy, the position and attitude of the n-th joint relative to the base coordinate system is:
T_n = A_1 A_2 \cdots A_n = \begin{bmatrix} q_n^{3\times 3} & p_n^{3\times 1} \\ 0 & 1 \end{bmatrix}    (2)
In the formula, q_n^{3×3} represents the attitude of the n-th joint and p_n^{3×1} represents the position of the n-th joint relative to the base coordinate system; each A_i can be expressed from the D-H parameters as:
A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}    (3)
where θ_i, d_i, a_i and α_i are the D-H parameters of robot joint i;
In this embodiment the cylinder model of the robot in the virtual scene is built in the robot base coordinate system: the position of each joint relative to the base coordinate system is solved, each link between adjacent joints is modeled as a right circular cylinder whose top and bottom face centers lie at the positions of the two joint points, and the cylinder radius is adjusted according to the actual robot; the cylinder model of the 6-DOF robot that is built is shown in Fig. 2.
2) Acquire the RGB image of the real scene and the skeleton data of the operator in real time with the Kinect, and build the cylinder model of the operator in the virtual scene, as follows:
When the operator enters the robot's working area, the Kinect fixed in front of the operator acquires the RGB image of the real scene and the skeleton data of the operator in real time, so that the operator is tracked and located. The Kinect has three cameras: one RGB color camera for acquiring color images and two infrared cameras for acquiring depth images.
For acquisition of the real-scene RGB image, the Kinect is placed at a suitable position in the environment and initialized with the color-stream flag NUI_INITIALIZE_FLAG_USES_COLOR, and the acquired color image frames are drawn with OpenCV.
For acquisition of the skeleton data, the Kinect is initialized with the skeleton flag NUI_INITIALIZE_FLAG_USES_SKELETON; when a person is standing, the Kinect obtains the positions of 20 joint points representing the person's skeleton. In this embodiment, 15 of these joint points are extracted to build the cylinder model of the operator in the virtual scene; sorted from top to bottom and from left to right they are: ① head; ② shoulder center; ③ right shoulder; ④ right elbow; ⑤ right hand; ⑥ left shoulder; ⑦ left elbow; ⑧ left hand; ⑨ hip center; ⑩ right hip; ⑪ right knee; ⑫ right foot; ⑬ left hip; ⑭ left knee; ⑮ left foot. The positions of these joint points are all given relative to the Kinect coordinate system.
In this embodiment the cylinder model of the operator is built in the Kinect coordinate system: from the Kinect's processing of the depth image, the positions of the operator's 15 joint points are obtained, each pair of adjacent joint points in the human skeleton is modeled as a right circular cylinder whose top and bottom face centers lie at the positions of the two joint points, and the cylinder radius is adjusted according to the actual person; the cylinder model of the operator that is built is shown in Fig. 3.
3) Calibrate the correspondence of the robot and the operator between the real scene and the virtual scene, as follows:
In the above two steps, the robot in the virtual scene is modeled in the robot base coordinate system and the operator in the virtual scene is modeled in the Kinect coordinate system. To put the virtual scene and the real scene into correspondence, a coordinate system is chosen in the real scene and called the world coordinate system E1; the robot base coordinate system is called coordinate system E2, and the Kinect coordinate system is called coordinate system E3.
The relation between the robot base coordinate system E2 and the world coordinate system E1 can be represented by a rotation matrix R and a translation matrix T. For example, if the coordinates of a joint point P of the robot are (X_P^{E2}, Y_P^{E2}, Z_P^{E2}) in the robot base coordinate system and (X_P^{E1}, Y_P^{E1}, Z_P^{E1}) in the world coordinate system, the relation between them is:
\begin{bmatrix} X_P^{E1} \\ Y_P^{E1} \\ Z_P^{E1} \\ 1 \end{bmatrix} = \begin{bmatrix} R_{E2}^{E1} & T_{E2}^{E1} \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} X_P^{E2} \\ Y_P^{E2} \\ Z_P^{E2} \\ 1 \end{bmatrix} = M_2 \cdot \begin{bmatrix} X_P^{E2} \\ Y_P^{E2} \\ Z_P^{E2} \\ 1 \end{bmatrix}    (4)
In the formula, R_{E2}^{E1} is a 3×3 matrix representing the attitude change of the robot base coordinate system relative to the world coordinate system; T_{E2}^{E1} is a 3×1 matrix representing the position change of the robot base coordinate system relative to the world coordinate system; M_2 is the 4×4 matrix representing the pose transformation of the robot base coordinate system relative to the world coordinate system.
Likewise, the relation between the Kinect coordinate system E3 and the world coordinate system E1 can be represented by a rotation matrix R and a translation matrix T. For example, if the coordinates of a joint point P' of the operator are (X_{P'}^{E3}, Y_{P'}^{E3}, Z_{P'}^{E3}) in the Kinect coordinate system and (X_{P'}^{E1}, Y_{P'}^{E1}, Z_{P'}^{E1}) in the world coordinate system, the relation between them is:
\begin{bmatrix} X_{P'}^{E1} \\ Y_{P'}^{E1} \\ Z_{P'}^{E1} \\ 1 \end{bmatrix} = \begin{bmatrix} R_{E3}^{E1} & T_{E3}^{E1} \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} X_{P'}^{E3} \\ Y_{P'}^{E3} \\ Z_{P'}^{E3} \\ 1 \end{bmatrix} = M_3 \cdot \begin{bmatrix} X_{P'}^{E3} \\ Y_{P'}^{E3} \\ Z_{P'}^{E3} \\ 1 \end{bmatrix}    (5)
After the robot base coordinate system and the Kinect coordinate system have both been transformed into the world coordinate system, the next task is to determine the mapping between the real scene and the virtual scene and to measure the error between them. If P_v is an arbitrary point in the virtual scene and P_r is its corresponding point in the RGB image of the real scene, then there exists a mapping f such that:
P_r = f(P_v) + ε    (6)
In the formula, ε is the correspondence error between the virtual scene and the real scene.
For the three-dimensional skeleton data and the two-dimensional color-image data of the operator obtained by the Kinect, the Kinect SDK provides the mutual conversion between the two, i.e. it provides the mapping f_person between the operator in the virtual scene and the operator in the real scene: NuiTransformSkeletonToDepthImage provides the mapping f_1 from the three-dimensional skeleton data to the two-dimensional depth image, and NuiImageGetColorPixelCoordinatesFromDepthPixel provides the mapping f_2 from the two-dimensional depth image to the two-dimensional color image, so the mapping from the three-dimensional skeleton data to the two-dimensional color image is:
f_person = f_2 · f_1    (7)
At this point the operator in the virtual scene corresponds to the operator in the real scene. For the robot, since the robot base coordinate system and the Kinect coordinate system have both been transformed into the world coordinate system, the virtual scene containing the robot and the operator shares the same coordinate system as the real scene, so the mapping f_robot between the virtual-scene robot and the real-scene robot is the same as the mapping f_person between the virtual-scene operator and the real-scene operator:
f_robot = f_person    (8)
Then for the whole virtual scene and real scene, the mapping f between them is:
f = f_robot = f_person = f_2 · f_1    (9)
Combining formulas (6) and (9) above, the error ε between the virtual scene and the real scene can be measured as:
ε = P_r − f_2 · f_1(P_v)    (10)
4) Perform real-time collision detection between the robot model and the operator model, as follows:
When the operator enters the robot's working environment, the operator's motion is unpredictable for the robot, and the modeling strategy for the operator and the robot directly affects the efficiency and accuracy of collision detection. In this embodiment both the robot and the operator are modeled with the relatively regular geometric shape of the right circular cylinder, and the collision-detection scheme based on cylinder bounding volumes is as follows:
Let A and B be the centers of the top and bottom faces of one cylinder and C and D the centers of the top and bottom faces of the other cylinder, so that the space lines l_AB and l_CD are the axes of the two cylinders. The main case considered is that the two lines are skew: the common perpendicular of the skew lines l_AB and l_CD and its intersection points P and Q with the two lines are found; if P or Q does not lie within segment AB or segment CD, boundary detection is performed and the endpoint nearest the other line is substituted for P or Q.
If P and Q lie within segments AB and CD respectively, the two cylinders can only intersect at their lateral surfaces, and it is only necessary to compare the length of segment PQ with the sum of the radii of the two cylinders.
If one of P, Q falls inside its segment and the other falls on an endpoint, the two cylinders can only intersect between a lateral surface and an end face; in this case the generatrix of the lateral surface nearest the end face is found and it is judged whether this generatrix intersects the end face.
If P and Q both fall on segment endpoints, the two cylinders can only intersect at their end faces, and it is only necessary to judge whether the two end discs intersect in space.
The embodiment described above is only a preferred embodiment of the present invention and does not limit the scope of practice of the present invention; therefore all changes made according to the shape and principle of the present invention shall fall within the protection scope of the present invention.

Claims (1)

1. A method for a robot to intelligently avoid humans in a human-robot collaboration scene, characterized by comprising the following steps:
1) Build the cylinder model of the robot in the virtual scene from the robot's D-H parameters, as follows:
Assume that the robot is composed of a series of links and joints in an arbitrary form; the D-H parameters represent the relation between the robot's joints and links. To model the robot, each joint is modeled as a point and each link as a right circular cylinder. Using the forward kinematics of the robot, the transformation of each joint's coordinate system relative to the base coordinate system is solved from the initial angle of each joint. A right-handed coordinate system is defined for each joint, and the homogeneous transformation between the coordinate system of one joint and that of the next joint is called an A matrix. A_1 represents the position and attitude of the first joint relative to the base coordinate system, A_2 represents the position and attitude of the second joint relative to the first joint, and the position and attitude of the second joint relative to the base coordinate system is the following matrix product:
T_2 = A_1 A_2    (1)
By analogy, the position and attitude of the n-th joint relative to the base coordinate system is:
T_n = A_1 A_2 \cdots A_n = \begin{bmatrix} q_n^{3\times 3} & p_n^{3\times 1} \\ 0 & 1 \end{bmatrix}    (2)
In the formula, q_n^{3×3} represents the attitude of the n-th joint and p_n^{3×1} represents the position of the n-th joint relative to the base coordinate system; each A_i can be expressed from the D-H parameters as:
A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}    (3)
where θ_i, d_i, a_i and α_i are the D-H parameters of robot joint i;
The cylinder model of the robot in the virtual scene is built in the robot base coordinate system: the position of each joint relative to the base coordinate system is solved, each link between adjacent joints is modeled as a right circular cylinder whose top and bottom face centers lie at the positions of the two joint points, the cylinder radius is adjusted according to the actual robot, and the cylinder model of the 6-DOF robot is thus built;
2) Acquire the RGB image of the real scene and the skeleton data of the operator in real time with the Kinect, and build the cylinder model of the operator in the virtual scene, as follows:
When the operator enters the robot's working area, the Kinect fixed in front of the operator acquires the RGB image of the real scene and the skeleton data of the operator in real time, so that the operator is tracked and located. The Kinect has three cameras: one RGB color camera for acquiring color images and two infrared cameras for acquiring depth images;
For acquisition of the real-scene RGB image, the Kinect is placed at a suitable position in the environment and initialized with the color-stream flag NUI_INITIALIZE_FLAG_USES_COLOR, and the acquired color image frames are drawn with OpenCV;
For acquisition of the skeleton data, the Kinect is initialized with the skeleton flag NUI_INITIALIZE_FLAG_USES_SKELETON; when a person is standing, the Kinect obtains the positions of 20 joint points representing the person's skeleton. Fifteen of these joint points are extracted to build the cylinder model of the operator in the virtual scene; sorted from top to bottom and from left to right they are: ① head; ② shoulder center; ③ right shoulder; ④ right elbow; ⑤ right hand; ⑥ left shoulder; ⑦ left elbow; ⑧ left hand; ⑨ hip center; ⑩ right hip; ⑪ right knee; ⑫ right foot; ⑬ left hip; ⑭ left knee; ⑮ left foot. The positions of these joint points are all given relative to the Kinect coordinate system;
The cylinder model of the operator is built in the Kinect coordinate system: from the Kinect's processing of the depth image, the positions of the operator's 15 joint points are obtained, each pair of adjacent joint points in the human skeleton is modeled as a right circular cylinder whose top and bottom face centers lie at the positions of the two joint points, and the cylinder radius is adjusted according to the actual person;
3) Calibrate the correspondence of the robot and the operator between the real scene and the virtual scene, as follows:
In the above two steps, the robot in the virtual scene is modeled in the robot base coordinate system and the operator in the virtual scene is modeled in the Kinect coordinate system. To put the virtual scene and the real scene into correspondence, a coordinate system is chosen in the real scene and called the world coordinate system E1; the robot base coordinate system is called coordinate system E2, and the Kinect coordinate system is called coordinate system E3;
The relation between the robot base coordinate system E2 and the world coordinate system E1 is represented by a rotation matrix R and a translation matrix T. If the coordinates of a joint point P of the robot are (X_P^{E2}, Y_P^{E2}, Z_P^{E2}) in the robot base coordinate system and (X_P^{E1}, Y_P^{E1}, Z_P^{E1}) in the world coordinate system, the relation between them is:
\begin{bmatrix} X_P^{E1} \\ Y_P^{E1} \\ Z_P^{E1} \\ 1 \end{bmatrix} = \begin{bmatrix} R_{E2}^{E1} & T_{E2}^{E1} \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} X_P^{E2} \\ Y_P^{E2} \\ Z_P^{E2} \\ 1 \end{bmatrix} = M_2 \cdot \begin{bmatrix} X_P^{E2} \\ Y_P^{E2} \\ Z_P^{E2} \\ 1 \end{bmatrix}    (4)
In the formula, R_{E2}^{E1} is a 3×3 matrix representing the attitude change of the robot base coordinate system relative to the world coordinate system; T_{E2}^{E1} is a 3×1 matrix representing the position change of the robot base coordinate system relative to the world coordinate system; M_2 is the 4×4 matrix representing the pose transformation of the robot base coordinate system relative to the world coordinate system;
Likewise, the relation between the Kinect coordinate system E3 and the world coordinate system E1 is represented by a rotation matrix R and a translation matrix T. If the coordinates of a joint point P' of the operator are (X_{P'}^{E3}, Y_{P'}^{E3}, Z_{P'}^{E3}) in the Kinect coordinate system and (X_{P'}^{E1}, Y_{P'}^{E1}, Z_{P'}^{E1}) in the world coordinate system, the relation between them is:
\begin{bmatrix} X_{P'}^{E1} \\ Y_{P'}^{E1} \\ Z_{P'}^{E1} \\ 1 \end{bmatrix} = \begin{bmatrix} R_{E3}^{E1} & T_{E3}^{E1} \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} X_{P'}^{E3} \\ Y_{P'}^{E3} \\ Z_{P'}^{E3} \\ 1 \end{bmatrix} = M_3 \cdot \begin{bmatrix} X_{P'}^{E3} \\ Y_{P'}^{E3} \\ Z_{P'}^{E3} \\ 1 \end{bmatrix}    (5)
After the robot base coordinate system and the Kinect coordinate system have both been transformed into the world coordinate system, the next task is to determine the mapping between the real scene and the virtual scene and to measure the error between them. If P_v is an arbitrary point in the virtual scene and P_r is its corresponding point in the RGB image of the real scene, then there exists a mapping f such that:
P_r = f(P_v) + ε    (6)
In the formula, ε is the correspondence error between the virtual scene and the real scene;
For the three-dimensional skeleton data and the two-dimensional color-image data of the operator obtained by the Kinect, the Kinect SDK provides the mutual conversion between the two, i.e. it provides the mapping f_person between the operator in the virtual scene and the operator in the real scene: NuiTransformSkeletonToDepthImage provides the mapping f_1 from the three-dimensional skeleton data to the two-dimensional depth image, and NuiImageGetColorPixelCoordinatesFromDepthPixel provides the mapping f_2 from the two-dimensional depth image to the two-dimensional color image, so the mapping from the three-dimensional skeleton data to the two-dimensional color image is:
f_person = f_2 · f_1    (7)
At this point the operator in the virtual scene corresponds to the operator in the real scene. For the robot, since the robot base coordinate system and the Kinect coordinate system have both been transformed into the world coordinate system, the virtual scene containing the robot and the operator shares the same coordinate system as the real scene, so the mapping f_robot between the virtual-scene robot and the real-scene robot is the same as the mapping f_person between the virtual-scene operator and the real-scene operator:
f_robot = f_person    (8)
Then for the whole virtual scene and real scene, the mapping f between them is:
f = f_robot = f_person = f_2 · f_1    (9)
Combining formulas (6) and (9) above, the error ε between the virtual scene and the real scene can be measured as:
ε = P_r − f_2 · f_1(P_v)    (10)
4) Perform real-time collision detection between the robot model and the operator model, as follows:
When the operator enters the robot's working environment, the operator's motion is unpredictable for the robot, and the modeling strategy for the operator and the robot directly affects the efficiency and accuracy of collision detection. Both the robot and the operator are modeled with the relatively regular geometric shape of the right circular cylinder, and the collision-detection scheme based on cylinder bounding volumes is as follows:
Let A and B be the centers of the top and bottom faces of one cylinder and C and D the centers of the top and bottom faces of the other cylinder, so that the space lines l_AB and l_CD are the axes of the two cylinders. The main case considered is that the two lines are skew: the common perpendicular of the skew lines l_AB and l_CD and its intersection points P and Q with the two lines are found; if P or Q does not lie within segment AB or segment CD, boundary detection is performed and the endpoint nearest the other line is substituted for P or Q;
If P and Q lie within segments AB and CD respectively, the two cylinders can only intersect at their lateral surfaces, and it is only necessary to compare the length of segment PQ with the sum of the radii of the two cylinders;
If one of P, Q falls inside its segment and the other falls on an endpoint, the two cylinders can only intersect between a lateral surface and an end face; in this case the generatrix of the lateral surface nearest the end face is found and it is judged whether this generatrix intersects the end face;
If P and Q both fall on segment endpoints, the two cylinders can only intersect at their end faces, and it is only necessary to judge whether the two end discs intersect in space.
CN201510518563.4A 2015-08-21 2015-08-21 Method for a robot to intelligently avoid humans in a human-robot collaboration scene Active CN105137973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510518563.4A CN105137973B (en) 2015-08-21 2015-08-21 Method for a robot to intelligently avoid humans in a human-robot collaboration scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510518563.4A CN105137973B (en) 2015-08-21 2015-08-21 Method for a robot to intelligently avoid humans in a human-robot collaboration scene

Publications (2)

Publication Number Publication Date
CN105137973A true CN105137973A (en) 2015-12-09
CN105137973B CN105137973B (en) 2017-12-01

Family

ID=54723348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510518563.4A Active CN105137973B (en) Method for a robot to intelligently avoid humans in a human-robot collaboration scene

Country Status (1)

Country Link
CN (1) CN105137973B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106078752A (en) * 2016-06-27 2016-11-09 西安电子科技大学 Method is imitated in a kind of anthropomorphic robot human body behavior based on Kinect
CN107564065A (en) * 2017-09-22 2018-01-09 东南大学 The measuring method of man-machine minimum range under a kind of Collaborative environment
CN108427331A (en) * 2018-03-30 2018-08-21 烟台维度机器人有限公司 A kind of man-machine collaboration safety protecting method and system
CN108527370A (en) * 2018-04-16 2018-09-14 北京卫星环境工程研究所 The man-machine co-melting safety control system of view-based access control model
CN108846891A (en) * 2018-05-30 2018-11-20 广东省智能制造研究所 A kind of man-machine safety collaboration method based on three-dimensional framework detection
CN109116992A (en) * 2018-08-31 2019-01-01 北京航空航天大学 A kind of collision response system for virtual hand force feedback interaction
CN109219856A (en) * 2016-03-24 2019-01-15 宝利根 T·R 有限公司 For the mankind and robot cooperated system and method
CN109465835A (en) * 2018-09-25 2019-03-15 华中科技大学 The safety predicting method in advance of both arms service robot operation under a kind of dynamic environment
CN109500811A (en) * 2018-11-13 2019-03-22 华南理工大学 A method of the mankind are actively avoided towards man-machine co-melting robot
CN109844672A (en) * 2016-08-24 2019-06-04 西门子股份公司 Method for testing autonomous system
CN110175523A (en) * 2019-04-26 2019-08-27 南京华捷艾米软件科技有限公司 A kind of self-movement robot animal identification and hide method and its storage medium
CN110640742A (en) * 2018-11-07 2020-01-03 宁波赛朗科技有限公司 Industrial robot platform of multi-mode control
CN111640175A (en) * 2018-06-21 2020-09-08 华为技术有限公司 Object modeling movement method, device and equipment
CN111735601A (en) * 2020-08-04 2020-10-02 中国空气动力研究与发展中心低速空气动力研究所 Wall collision prevention method for double-engine refueling wind tunnel test supporting device
WO2021027945A1 (en) * 2019-08-15 2021-02-18 纳恩博(常州)科技有限公司 Coordinate obtaining method and apparatus for movable device
CN113370210A (en) * 2021-06-23 2021-09-10 华北科技学院(中国煤矿安全技术培训中心) Robot active collision avoidance system and method
CN113733098A (en) * 2021-09-28 2021-12-03 武汉联影智融医疗科技有限公司 Mechanical arm model pose calculation method and device, electronic equipment and storage medium
WO2021248652A1 (en) * 2020-06-10 2021-12-16 南京英尼格玛工业自动化技术有限公司 Automatic welding gun trace relief method for high-speed rail bolster auxiliary hole
CN115496798A (en) * 2022-11-08 2022-12-20 中国电子科技集团公司第三十八研究所 Co-location method and system for tethered balloon equipment simulation training
CN115890671A (en) * 2022-11-17 2023-04-04 山东大学 SMPL parameter-based multi-geometry human body collision model generation method and system
WO2023217032A1 (en) * 2022-05-09 2023-11-16 苏州艾利特机器人有限公司 Robot collision detection method, storage medium and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103170973A (en) * 2013-03-28 2013-06-26 上海理工大学 Man-machine cooperation device and method based on Kinect video camera
CN103226387A (en) * 2013-04-07 2013-07-31 华南理工大学 Video fingertip positioning method based on Kinect
CN103399637A (en) * 2013-07-31 2013-11-20 西北师范大学 Man-computer interaction method for intelligent human skeleton tracking control robot on basis of kinect
CN104570731A (en) * 2014-12-04 2015-04-29 重庆邮电大学 Uncalibrated human-computer interaction control system and method based on Kinect
CN104777775A (en) * 2015-03-25 2015-07-15 北京工业大学 Two-wheeled self-balancing robot control method based on Kinect device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103170973A (en) * 2013-03-28 2013-06-26 上海理工大学 Man-machine cooperation device and method based on Kinect video camera
CN103226387A (en) * 2013-04-07 2013-07-31 华南理工大学 Video fingertip positioning method based on Kinect
CN103399637A (en) * 2013-07-31 2013-11-20 西北师范大学 Man-computer interaction method for intelligent human skeleton tracking control robot on basis of kinect
CN104570731A (en) * 2014-12-04 2015-04-29 重庆邮电大学 Uncalibrated human-computer interaction control system and method based on Kinect
CN104777775A (en) * 2015-03-25 2015-07-15 北京工业大学 Two-wheeled self-balancing robot control method based on Kinect device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GAN Yi et al., "Optimization design of the D-H parameters of a 6R robot based on a given workspace", China Mechanical Engineering *
HE Chao et al., "Target tracking and obstacle avoidance for a mobile robot using Kinect", CAAI Transactions on Intelligent Systems *
GUO Fayong et al., "Problems in establishing link coordinate systems with the D-H method and improvements", China Mechanical Engineering *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109219856A (en) * 2016-03-24 2019-01-15 宝利根 T·R 有限公司 For the mankind and robot cooperated system and method
CN106078752B (en) * 2016-06-27 2019-03-19 西安电子科技大学 A kind of anthropomorphic robot human body behavior imitation method based on Kinect
CN106078752A (en) * 2016-06-27 2016-11-09 西安电子科技大学 Method is imitated in a kind of anthropomorphic robot human body behavior based on Kinect
CN109844672B (en) * 2016-08-24 2022-08-12 西门子股份公司 Method for testing autonomous systems
US11556118B2 (en) 2016-08-24 2023-01-17 Siemens Aktiengesellschaft Method for testing an autonomous system
CN109844672A (en) * 2016-08-24 2019-06-04 西门子股份公司 Method for testing autonomous system
CN107564065B (en) * 2017-09-22 2019-10-22 东南大学 The measuring method of man-machine minimum range under a kind of Collaborative environment
CN107564065A (en) * 2017-09-22 2018-01-09 东南大学 The measuring method of man-machine minimum range under a kind of Collaborative environment
CN108427331A (en) * 2018-03-30 2018-08-21 烟台维度机器人有限公司 A kind of man-machine collaboration safety protecting method and system
CN108527370A (en) * 2018-04-16 2018-09-14 北京卫星环境工程研究所 The man-machine co-melting safety control system of view-based access control model
CN108846891A (en) * 2018-05-30 2018-11-20 广东省智能制造研究所 A kind of man-machine safety collaboration method based on three-dimensional framework detection
CN108846891B (en) * 2018-05-30 2023-04-28 广东省智能制造研究所 Man-machine safety cooperation method based on three-dimensional skeleton detection
US11436802B2 (en) 2018-06-21 2022-09-06 Huawei Technologies Co., Ltd. Object modeling and movement method and apparatus, and device
CN111640175A (en) * 2018-06-21 2020-09-08 华为技术有限公司 Object modeling movement method, device and equipment
CN109116992A (en) * 2018-08-31 2019-01-01 北京航空航天大学 A kind of collision response system for virtual hand force feedback interaction
CN109116992B (en) * 2018-08-31 2020-12-04 北京航空航天大学 Collision response system for virtual hand force feedback interaction
CN109465835A (en) * 2018-09-25 2019-03-15 华中科技大学 The safety predicting method in advance of both arms service robot operation under a kind of dynamic environment
CN110640742A (en) * 2018-11-07 2020-01-03 宁波赛朗科技有限公司 Industrial robot platform of multi-mode control
CN109500811A (en) * 2018-11-13 2019-03-22 华南理工大学 A method of the mankind are actively avoided towards man-machine co-melting robot
CN110175523A (en) * 2019-04-26 2019-08-27 南京华捷艾米软件科技有限公司 A kind of self-movement robot animal identification and hide method and its storage medium
CN110175523B (en) * 2019-04-26 2021-05-14 南京华捷艾米软件科技有限公司 Self-moving robot animal identification and avoidance method and storage medium thereof
WO2021027945A1 (en) * 2019-08-15 2021-02-18 纳恩博(常州)科技有限公司 Coordinate obtaining method and apparatus for movable device
WO2021248652A1 (en) * 2020-06-10 2021-12-16 南京英尼格玛工业自动化技术有限公司 Automatic welding gun trace relief method for high-speed rail bolster auxiliary hole
CN111735601A (en) * 2020-08-04 2020-10-02 中国空气动力研究与发展中心低速空气动力研究所 Wall collision prevention method for double-engine refueling wind tunnel test supporting device
CN113370210A (en) * 2021-06-23 2021-09-10 华北科技学院(中国煤矿安全技术培训中心) Robot active collision avoidance system and method
CN113733098A (en) * 2021-09-28 2021-12-03 武汉联影智融医疗科技有限公司 Mechanical arm model pose calculation method and device, electronic equipment and storage medium
CN113733098B (en) * 2021-09-28 2023-03-03 武汉联影智融医疗科技有限公司 Mechanical arm model pose calculation method and device, electronic equipment and storage medium
WO2023217032A1 (en) * 2022-05-09 2023-11-16 苏州艾利特机器人有限公司 Robot collision detection method, storage medium and electronic device
CN115496798A (en) * 2022-11-08 2022-12-20 中国电子科技集团公司第三十八研究所 Co-location method and system for tethered balloon equipment simulation training
CN115890671A (en) * 2022-11-17 2023-04-04 山东大学 SMPL parameter-based multi-geometry human body collision model generation method and system

Also Published As

Publication number Publication date
CN105137973B (en) 2017-12-01

Similar Documents

Publication Publication Date Title
CN105137973A (en) Method for robot to intelligently avoid human under man-machine cooperation scene
CN110480634B (en) Arm guide motion control method for mechanical arm motion control
CN106826833B (en) Autonomous navigation robot system based on 3D (three-dimensional) stereoscopic perception technology
CN110706248B (en) Visual perception mapping method based on SLAM and mobile robot
CN103170973B (en) Man-machine cooperation device and method based on Kinect video camera
CN102880866B (en) Method for extracting face features
CN101604447B (en) No-mark human body motion capture method
CN101511550B (en) Method for observation of person in industrial environment
CN103049912B (en) Random trihedron-based radar-camera system external parameter calibration method
CN102982557B (en) Method for processing space hand signal gesture command based on depth camera
Tang et al. 3D mapping and 6D pose computation for real time augmented reality on cylindrical objects
Kropatsch et al. Digital image analysis: selected techniques and applications
CN104570731A (en) Uncalibrated human-computer interaction control system and method based on Kinect
CN108838991A (en) It is a kind of from main classes people tow-armed robot and its to the tracking operating system of moving target
CN102800126A (en) Method for recovering real-time three-dimensional body posture based on multimodal fusion
CN103714322A (en) Real-time gesture recognition method and device
CN107030692B (en) Manipulator teleoperation method and system based on perception enhancement
CN105014677A (en) Visual mechanical arm control device and method based on Camshift visual tracking and D-H modeling algorithms
CN104036488A (en) Binocular vision-based human body posture and action research method
CN108846891B (en) Man-machine safety cooperation method based on three-dimensional skeleton detection
CN106625658A (en) Method for controlling anthropomorphic robot to imitate motions of upper part of human body in real time
CN107564065A (en) The measuring method of man-machine minimum range under a kind of Collaborative environment
Munkelt et al. A model driven 3D image interpretation system applied to person detection in video images
CN109214295B (en) Gesture recognition method based on data fusion of Kinect v2 and Leap Motion
CN115810188A (en) Method and system for identifying three-dimensional pose of fruit on tree based on single two-dimensional image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant