CN111360826A - System capable of displaying grabbing pose in real time - Google Patents

System capable of displaying grabbing pose in real time

Info

Publication number
CN111360826A
Authority
CN
China
Prior art keywords
mechanical arm
grabbing
computer
camera
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010132892.6A
Other languages
Chinese (zh)
Other versions
CN111360826B (en)
Inventor
庞剑坤
魏武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202010132892.6A priority Critical patent/CN111360826B/en
Publication of CN111360826A publication Critical patent/CN111360826A/en
Application granted granted Critical
Publication of CN111360826B publication Critical patent/CN111360826B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/04Viewing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1612Programme controls characterised by the hand, wrist, grip control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a system capable of displaying a grabbing pose in real time, comprising a mechanical arm part, a camera part, a target object and a computer, the mechanical arm part being connected with the camera part. The mechanical arm part comprises a six-degree-of-freedom mechanical arm and a two-finger clamping jaw, the six-degree-of-freedom mechanical arm being connected with the two-finger clamping jaw. The camera part comprises a depth camera, and the camera is connected with the computer. The computer comprises an algorithm processing unit for calculating the grabbing poses of the mechanical arm, the camera and the target object. The target object is placed below the camera, and the mechanical arm part moves from top to bottom along a trajectory; during this motion the camera acquires depth images of the target object and sends them to the computer, where the algorithm processing unit produces an information entropy map. The change of the optimal grabbing pose of the target object, the depth image acquired from the camera and the information entropy map are displayed on the computer's display screen.

Description

System capable of displaying grabbing pose in real time
Technical Field
The invention relates to the field of mechanical arm visual grabbing, and in particular to a system capable of displaying a grabbing pose in real time.
Background
In recent years, visual grabbing with mechanical arms has gradually become a research hotspot, and related applications are gradually reaching the market. Most existing mechanical arm grabbing systems rely on high-performance hardware, such as multi-core processors and graphics cards with sufficiently large memory. Such visual grabbing systems, which rely on conventional image recognition, are difficult to deploy in real scenarios. For example, the high-performance TITAN graphics card used in "Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects", published by NVIDIA, is very expensive. On the one hand, the high price and demanding hardware requirements limit the spread of such systems; on the other hand, with general-purpose hardware the real-time requirement of grabbing cannot be met, so such systems can generally only grab static objects, which is time-consuming and inefficient.
The system capable of displaying the grabbing pose in real time can solve these problems. The invention fully describes the components of the system, the communication mode between the components and the specific processing method. During the downward grabbing motion of the mechanical arm, depth information of the target object is obtained from multiple viewing angles, the optimal grabbing pose of the target object is generated, and it is displayed in real time, thereby achieving dynamic grabbing. The processing time from information input to decision is greatly shortened, making the system efficient and effective; it has great reference value for the Robot Operating System (ROS) and environment perception systems, and can be popularized in the field of industrial robots.
Disclosure of Invention
The invention provides a system capable of displaying a grabbing pose in real time. It mainly solves the problems of existing algorithms, such as high hardware requirements and the inability to display the grabbing pose in real time. It fully describes the components of the system, the communication mode between the components and the specific processing method, shortens the computation time by means of a computationally efficient grid-map method, and realizes the function of displaying the optimal grabbing pose in real time.
The invention is realized by at least one of the following technical schemes.
A system capable of displaying a grabbing pose in real time comprises a mechanical arm part, a camera part, a target object and a computer;
the mechanical arm part comprises a six-degree-of-freedom mechanical arm and two-finger clamping jaws, the six-degree-of-freedom mechanical arm is connected with the two-finger clamping jaws, and the two-finger clamping jaws are arranged at the tail end of the six-degree-of-freedom mechanical arm;
the camera part comprises a depth camera, and the camera is connected with a computer; the depth camera is arranged right above the two-finger clamping jaw;
the computer comprises an algorithm processing unit, and the algorithm processing unit is used for calculating the grabbing poses of the mechanical arm, the camera and the target object;
the target object is placed below the depth camera, the mechanical arm part moves from top to bottom along a track, in the process, the depth camera acquires a depth image of the target object and sends the depth image to the computer, an information entropy diagram is obtained through processing of an algorithm processing unit of the computer, and a change diagram of the optimal grabbing pose of the target object, the depth image acquired from the depth camera and the information entropy diagram are displayed on a display screen of the computer.
Further, the six-degree-of-freedom mechanical arm is a UR5 industrial mechanical arm, and the two-finger clamping jaw is an RG2 clamping jaw.
Further, the depth camera is an Intel RealSense D435i.
Further, the computer system used by the algorithm processing unit is Ubuntu 16.04, and the robot operating system is ROS Kinetic.
Further, the motion trajectory of the mechanical arm part is defined as follows:
p represents the three-dimensional position of the camera as the mechanical arm moves downward along a certain trajectory;
k represents the number of such p points along the trajectory;
Γ = {p_0, ..., p_k}: a random trajectory consisting of k p points;
p_0 is the position of the camera before the mechanical arm starts to move, with corresponding vertical height z_max;
p_k is the position of the camera when the mechanical arm finishes its motion, with corresponding vertical height z_min.
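By way of illustration only (not part of the claimed subject matter), the following minimal Python sketch shows how such a descent trajectory Γ might be sampled; the function name, the linear descent and the lateral jitter are our assumptions:

    import numpy as np

    def sample_descent_trajectory(p0, z_min, k, xy_jitter=0.02):
        """Return Gamma = [p_0, ..., p_k]: camera positions descending
        from p0 = (x, y, z_max) down to height z_min, with small random
        lateral offsets giving a 'random trajectory'."""
        x0, y0, z_max = p0
        heights = np.linspace(z_max, z_min, k + 1)   # z_max ... z_min
        gamma = []
        for z in heights:
            dx, dy = np.random.uniform(-xy_jitter, xy_jitter, size=2)
            gamma.append((x0 + dx, y0 + dy, z))
        return gamma

    # Example: 20 viewpoints descending from 0.5 m to 0.2 m above the table.
    trajectory = sample_descent_trajectory((0.0, 0.0, 0.5), 0.2, 20)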
Further, after the depth camera is started by the computer, it sends depth images to the computer at an update frequency of 80 fps, ensuring the continuity of data transmission.
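For illustration, a depth stream of this kind could be opened with Intel's pyrealsense2 SDK roughly as follows; the resolution and frame-rate profile chosen here are assumptions, and the achievable update rate depends on the selected profile:

    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # Depth stream at 640x480, 16-bit; the exact frame rate achievable
    # depends on the chosen stream profile.
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 90)
    pipeline.start(config)
    try:
        for _ in range(300):                      # grab a few seconds of frames
            frames = pipeline.wait_for_frames()   # blocks until a frameset arrives
            depth = frames.get_depth_frame()
            if depth:
                d = depth.get_distance(320, 240)  # distance (m) at image center
    finally:
        pipeline.stop()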
Further, the six-degree-of-freedom mechanical arm, the depth camera and the computer communicate under ROS. Specifically, the computer is connected with the six-degree-of-freedom mechanical arm through a network cable, starts the mechanical arm, and sends it control instructions. The computer receives in real time the depth information of the target object acquired by the depth camera and feeds it into the internal algorithm processing unit, which outputs the optimal grabbing pose of the target object; this pose is displayed on the computer screen, so that the optimal grabbing pose of the target object is shown in real time. The algorithm processing time is less than 0.5 s.
Further, the algorithm processing unit computes the grabbing pose of the target object, defined as follows:
the grabbing pose of the target object, g = (c, φ, w, q), denotes the parameters involved in a complete grabbing motion;
c = (x, y, z) represents the three-dimensional coordinates of the grabbing point on the target object, i.e. the target position the clamping jaw needs to reach;
x, y and z respectively denote the X-, Y- and Z-axis coordinates in a Cartesian coordinate system, in mm;
φ ∈ [0, π] represents the angle the clamping jaw needs to rotate to grab the target object;
w represents the width the clamping jaw needs to open to grab the target object, in mm;
q ∈ [0, 1] indicates the grabbing quality; by convention, the larger the value, the higher the grabbing success rate.
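As an illustrative aside, the grabbing pose g = (c, φ, w, q) maps naturally onto a small data structure; the Python names below are our own:

    from dataclasses import dataclass

    @dataclass
    class Grasp:
        """Grabbing pose g = (c, phi, w, q) as defined above."""
        c: tuple      # (x, y, z): grasp point in mm
        phi: float    # rotation angle of the jaw, in [0, pi]
        w: float      # jaw opening width in mm
        q: float      # grasp quality in [0, 1]; larger = more likely to succeed

    # Selecting the optimal grasp from a list of candidates by quality:
    # best = max(candidates, key=lambda g: g.q)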
In order to combine the observations taken at each time step along the viewpoint trajectory, the workspace of the six-degree-of-freedom mechanical arm and the two-finger clamping jaw is represented as a two-dimensional grid map M of J × K cells, where J and K respectively denote the length and width of M; each cell corresponds to a u × u physical area and serves as a unit cell, u being the size of the unit cell.
In each cell (j, k), which corresponds to a u × u unit cell of the physical region (j and k indexing its length and width), the grabbing quality observations q are added to a vector q_{j,k} and discretized into n_q intervals, n_q indexing the rows of the grid histogram; the combined quality and angle observations (q, φ) are recorded in a two-dimensional histogram m_{j,k} with n_q × n_φ bins, the abscissa n_φ encoding the rotation angle and the ordinate n_q encoding the grabbing quality. These vectors represent the distribution of observations at each point and form the basis of information acquisition: the grid map assigns each cell a number representing a probability, and the larger the number, the larger the information gain (n_q, n_φ) in that cell region. Whether a region contains the target object is determined from the numbers in the grid map: cells with large numbers contain the object, while cells with small numbers do not.
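A minimal sketch of the per-cell histogram update described above, assuming q ∈ [0, 1], φ ∈ [0, π] and illustrative bin counts n_q = 10, n_φ = 18 (the class and variable names are ours):

    import numpy as np

    N_Q, N_PHI = 10, 18   # numbers of quality and angle bins (illustrative)

    class CellHistogram:
        """Two-dimensional histogram m_{j,k} of (q, phi) observations
        for one cell (j, k) of the grid map."""
        def __init__(self):
            self.m = np.zeros((N_Q, N_PHI))

        def add(self, q, phi):
            iq = min(int(q * N_Q), N_Q - 1)                  # discretize q in [0, 1]
            iphi = min(int(phi / np.pi * N_PHI), N_PHI - 1)  # discretize phi in [0, pi]
            self.m[iq, iphi] += 1.0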
Further, the grid map is obtained by digitally converting a depth image of the object, and the grab within region (j, k) is defined by parameterizing the average of the observations in that region:

g_{j,k} = (c_{j,k}, \bar{\phi}_{j,k}, \bar{w}_{j,k}, \bar{q}_{j,k})

where g_{j,k} represents the grab within region (j, k); c_{j,k} represents the three-dimensional position of the grab center point; \bar{\phi}_{j,k} is the mean of \phi_{j,k}, the angle the two-finger clamping jaw must rotate to grab within region (j, k); \bar{w}_{j,k} is the mean of w_{j,k}, the width the two-finger clamping jaw must open to grab within region (j, k); and \bar{q}_{j,k} is the mean of q_{j,k}, the grabbing quality of the two-finger clamping jaw within region (j, k).
Further, the average observed values are computed as follows. For a single cell, the average grabbing quality observation \bar{q} is given by:

\bar{q} = \frac{1}{|N_q|} \sum_{n_q \in N_q} q_{n_q}

where N_q denotes the set of intervals n_q, and q_{n_q} denotes the quality value with subscript n_q.
The mean rotation angle \bar{\phi} is the vector mean of the angle observations weighted by the corresponding grasp quality observations:

\bar{\phi} = \arctan\left(\frac{\Pi}{\Psi}\right), \qquad \Pi = \sum_{n_q \in N_q} \sum_{n_\phi \in N_\phi} q \sin\phi, \qquad \Psi = \sum_{n_q \in N_q} \sum_{n_\phi \in N_\phi} q \cos\phi

where \Pi denotes the quality-weighted sum over the grid histogram of the sine values of all grabbing angles, \Psi the corresponding sum of cosine values, N_q the set of n_q, and N_\phi the set of n_\phi.
The average opening width of one cell, \bar{w}, is the average of the n observations:

\bar{w} = \frac{1}{n} \sum_{i=1}^{n} w_i

where n denotes the number of w values.
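A compact sketch of these three per-cell averages; numpy's arctan2 is used here instead of a plain arctan so the quadrant of the vector mean is handled correctly, which is an implementation choice rather than part of the patent text:

    import numpy as np

    def cell_averages(qs, phis, ws):
        """Compute (q_bar, phi_bar, w_bar) for one cell from its
        observation vectors, following the formulas above."""
        qs, phis, ws = map(np.asarray, (qs, phis, ws))
        q_bar = qs.mean()                    # average grasp quality
        Pi = np.sum(qs * np.sin(phis))       # quality-weighted sine sum
        Psi = np.sum(qs * np.cos(phis))      # quality-weighted cosine sum
        phi_bar = np.arctan2(Pi, Psi)        # weighted vector mean of angles
        w_bar = ws.mean()                    # average opening width
        return q_bar, phi_bar, w_bar

    # Example: q_bar, phi_bar, w_bar = cell_averages([0.7, 0.9], [0.4, 0.5], [30, 32])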
Compared with the prior art, the invention has the advantages and beneficial effects that:
1. The invention fully describes the components of the system, the communication mode between the components and the specific processing method. During the downward grabbing motion of the mechanical arm, depth information of the target object is obtained from multiple viewing angles, the optimal grabbing pose of the target object is generated and displayed in real time, achieving dynamic grabbing and greatly shortening the processing time from information input to decision; the system is efficient and effective.
2. The system components and the communication mode adopted by the invention simplify the information transmission process. At the same time, the grid-based computation method greatly shortens the time needed to compute the optimal grabbing pose; no high-performance processor or graphics card is required, so the system can run on an ordinary industrial personal computer or a notebook computer and is easy to popularize.
Drawings
Fig. 1 is a schematic diagram of a system capable of displaying a grabbing pose in real time according to the embodiment;
fig. 2 is a depth image map of the target object obtained in this embodiment;
FIG. 3 is a diagram of an optimal grabbing pose displayed in real time in this embodiment;
FIG. 4 is a grid diagram in the present embodiment;
FIG. 5 is a graph of the path of the jaw movement in this embodiment;
in the figure: 1-six degree of freedom mechanical arm; 2-a clamping jaw; 3-a depth camera; 4-a target object; 5-a computer.
Detailed Description
The working principle and working process of the present invention will be further explained in detail with reference to the accompanying drawings.
As shown in fig. 1, a system capable of displaying a grabbing pose in real time comprises a mechanical arm part, a camera part, a target object and a computer, wherein the mechanical arm part comprises a six-degree-of-freedom mechanical arm 1 and two-finger clamping jaws 2, and the two-finger clamping jaws 2 are arranged at the tail end of the six-degree-of-freedom mechanical arm 1;
the camera part comprises a depth camera 3; the depth camera 3 is connected with a computer 5 and is arranged right above the two-finger clamping jaw 2;
the target object comprises a plurality of objects 4 commonly found in daily life;
the computer 5 comprises an algorithm processing unit which is used for calculating the grabbing poses of the mechanical arm, the camera and the target object;
as shown in fig. 1, the camera 3 acquires a depth image of the target object and sends the depth image to the computer 5, the information entropy diagram shown in fig. 3 is obtained through processing by the algorithm processing unit of the computer 5, and a variation diagram of the optimal grabbing pose (rectangle) of the target object is displayed on the display screen of the computer 5.
The six-degree-of-freedom mechanical arm 1 is a UR5 industrial mechanical arm, and the two-finger clamping jaw 2 is an RG2 clamping jaw; the six-degree-of-freedom mechanical arm is connected with the two-finger clamping jaw. The depth camera 3 is an Intel RealSense D435i. The computer system used by the algorithm processing unit is Ubuntu 16.04, and the robot operating system is ROS Kinetic.
specifically, a target object 4 is randomly placed right below a camera, a six-degree-of-freedom mechanical arm 1 drives two fingers of a clamping jaw 2 to move from top to bottom along a track, as shown in fig. 5, in the process, the camera 3 continuously obtains depth information of the target object from a plurality of different visual angles and transmits the depth image information to a computer 5, the computer 5 processes the obtained depth image information according to an algorithm and calculates an optimal grabbing pose of the target object, and meanwhile, an object depth map, an information entropy map after algorithm processing and an object optimal grabbing pose (rectangular) change map calculated by the algorithm are displayed on a computer screen of the computer 5, and the depth image is shown in fig. 2.
The motion trajectory of the mechanical arm part is defined as follows:
p represents the three-dimensional position of the camera as the mechanical arm 1 moves downward along a certain trajectory;
k represents the number of such p points along the trajectory;
Γ = {p_0, ..., p_k} denotes a random trajectory consisting of k p points;
p_0 is the position of the camera before the mechanical arm starts to move, with corresponding vertical height z_max;
p_k is the position of the camera when the mechanical arm finishes its motion, with corresponding vertical height z_min.
After the depth camera 3 is started by the computer 5, the corresponding depth images are obtained and sent to the computer 5 at an update frequency of 80 fps, ensuring the continuity of data transmission.
As shown in fig. 1, the mechanical arm 1, the camera 3 and the computer 5 communicate with each other under the Robot Operating System (ROS). The computer 5 is connected to the mechanical arm 1 through a network cable, starts the mechanical arm 1, and sends status information and control commands to the six-degree-of-freedom mechanical arm 1. The computer 5 receives the information sent by the depth camera 3 in real time and feeds it to the algorithm processing unit. The whole communication flow is as follows: under ROS, the mechanical arm 1 and the depth camera 3 each publish their relevant information; the depth camera 3 acquires and publishes the depth information of the target object, i.e. the thickness of the object and its distance from the camera; the computer 5 continuously receives this information and passes the depth data to the algorithm processing unit, which outputs the optimal grabbing pose of the target object and publishes it as a topic. The computer 5 can start Rviz (a visualization tool bundled with ROS) to obtain the optimal grabbing pose of the target object and display it on its screen, so that the optimal grabbing pose is shown in real time; the algorithm processing time is less than 0.5 s.
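As an illustration of this communication flow, a minimal rospy node under ROS Kinetic might look as follows; the topic names and frame id are assumptions (the RealSense ROS driver commonly publishes depth on a topic such as /camera/depth/image_rect_raw), and the algorithm processing unit itself is not shown:

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import Image
    from geometry_msgs.msg import PoseStamped

    rospy.init_node("grasp_pose_display")

    # Publish the optimal grasp pose as a topic that Rviz can visualize.
    grasp_pub = rospy.Publisher("/best_grasp_pose", PoseStamped, queue_size=1)

    def depth_callback(msg):
        # Hand the depth image to the algorithm processing unit (not shown)
        # and publish the resulting optimal grasp pose.
        grasp = PoseStamped()
        grasp.header.stamp = rospy.Time.now()
        grasp.header.frame_id = "base_link"  # assumed robot base frame
        # ... fill grasp.pose from the computed (c, phi, w, q) ...
        grasp_pub.publish(grasp)

    rospy.Subscriber("/camera/depth/image_rect_raw", Image, depth_callback)
    rospy.spin()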
The algorithm processing unit computes the grabbing pose of the target object as follows:
the grabbing pose of the target object, g = (c, φ, w, q), denotes the parameters involved in a complete grabbing motion;
c = (x, y, z) denotes the three-dimensional coordinates of the grabbing point on the target object, i.e. the target position the clamping jaw needs to reach;
x, y and z respectively denote the X-, Y- and Z-axis coordinates in a Cartesian coordinate system, in mm;
φ ∈ [0, π] represents the angle the clamping jaw needs to rotate to grab the target object;
w represents the width the clamping jaw needs to open to grab the target object, in mm;
q ∈ [0, 1] represents the grabbing quality; by convention, the larger the value, the higher the grabbing success rate.
in order to combine the observations at the time step along the viewpoint trajectory, the working spaces of the six-degree-of-freedom robot arm and the two-finger gripper are represented as two-dimensional grid maps M of J × K units, J, K representing the length and width of the two-dimensional grid map M, respectively. Each unit corresponds to a u x u physical area and serves as a unit square grid, and u represents the size of the unit square grid;
as shown in fig. 4, in each cell (j, k), corresponding to a unit cell of u × (j, k) represents a physical region of j × k, j, k represent the length and width of the region, respectively, and the grip quality observation (q) is incorporated into a vector qj,kIn, discretizing into nqInterval, nqRepresenting grid diagram row coordinates, and combining the gripping quality and the angle observation value to form (q, phi) to be recorded in a two-dimensional histogram mj,kIn each case are separated by nq*nφInterval with abscissa of nφThe numerical value represents the rotation angle, and the column coordinate is nqThe value size represents the grasping quality. These vectors represent the distribution of the observation points at each point and form the basis of the information acquisition method, the grid map defines the probability that the number inside each square represents, the larger the number, the information gain (n) in the square area is representedq,nφ) The larger, the larger may be according to that in the grid mapThe number situation determines whether the area contains the object, the place with large number contains the object, and the other squares have small number, which indicates that the object is not contained.
The grid map adopted is obtained by digitally converting a depth image of the object. The grab within region (j, k) is defined by parameterizing the average of the observations in that region:

g_{j,k} = (c_{j,k}, \bar{\phi}_{j,k}, \bar{w}_{j,k}, \bar{q}_{j,k})

where g_{j,k} represents the grab within region (j, k); c_{j,k} represents the three-dimensional position of the grab center point; \bar{\phi}_{j,k} is the mean of \phi_{j,k}, the angle the two-finger clamping jaw must rotate to grab within region (j, k); \bar{w}_{j,k} is the mean of w_{j,k}, the width the two-finger clamping jaw must open to grab within region (j, k); and \bar{q}_{j,k} is the mean of q_{j,k}, the grabbing quality of the two-finger clamping jaw within region (j, k). The average observed values are computed as follows. For a single cell, the average grabbing quality observation \bar{q} is given by:
\bar{q} = \frac{1}{|N_q|} \sum_{n_q \in N_q} q_{n_q}

where N_q denotes the set of intervals n_q, and q_{n_q} denotes the quality value with subscript n_q.
The mean rotation angle \bar{\phi} is the vector mean of the angle observations weighted by the corresponding grasp quality observations:

\bar{\phi} = \arctan\left(\frac{\Pi}{\Psi}\right), \qquad \Pi = \sum_{n_q \in N_q} \sum_{n_\phi \in N_\phi} q \sin\phi, \qquad \Psi = \sum_{n_q \in N_q} \sum_{n_\phi \in N_\phi} q \cos\phi

where \Pi denotes the quality-weighted sum over the grid histogram of the sine values of all grabbing angles, \Psi the corresponding sum of cosine values, N_q the set of n_q, and N_\phi the set of n_\phi.
The average opening width of one cell, \bar{w}, is simply the average of the n observations:

\bar{w} = \frac{1}{n} \sum_{i=1}^{n} w_i

where n denotes the number of w values.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents or improvements made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A system capable of displaying a grabbing pose in real time, characterized in that: the system comprises a mechanical arm part, a camera part, a target object and a computer;
the mechanical arm part comprises a six-degree-of-freedom mechanical arm (1) and two-finger clamping jaws (2), the six-degree-of-freedom mechanical arm (1) is connected with the two-finger clamping jaws (2), and the two-finger clamping jaws (2) are installed at the tail end of the six-degree-of-freedom mechanical arm (1);
the camera part comprises a depth camera (3), and the depth camera (3) is connected with a computer (5); the depth camera (3) is arranged right above the two-finger clamping jaw (2);
the computer (5) comprises an algorithm processing unit, and the algorithm processing unit is used for calculating the grabbing poses of the mechanical arm, the camera and the target object;
the target object is placed below the depth camera (3), the mechanical arm part moves from top to bottom along a track, in the process, the depth camera (3) acquires a depth image of the target object and sends the depth image to the computer (5), an information entropy diagram is obtained through processing of an algorithm processing unit of the computer (5), and a change diagram of the optimal grabbing pose of the target object, the depth image acquired from the depth camera (3) and the information entropy diagram are displayed on a display screen of the computer (5).
2. The system capable of displaying the grabbing pose in real time according to claim 1, wherein: the six-degree-of-freedom mechanical arm (1) is a UR5 industrial mechanical arm, and the two-finger clamping jaw (2) is an RG2 clamping jaw.
3. The system capable of displaying the grabbing pose in real time according to claim 1, wherein: the depth camera (3) is an Intel RealSense D435i.
4. The system capable of displaying the grabbing pose in real time according to claim 1, wherein: the computer system used by the algorithm processing unit is Ubuntu 16.04, and the robot operating system is ROS Kinetic.
5. The system capable of displaying the grabbing pose in real time according to claim 1, wherein: the motion trajectory of the mechanical arm part is defined as follows:
p represents the three-dimensional position of the camera as the mechanical arm (1) moves downward along a certain trajectory;
k represents the number of such p points along the trajectory;
Γ = {p_0, ..., p_k}: a random trajectory consisting of k p points;
p_0 is the position of the camera before the mechanical arm starts to move, with corresponding vertical height z_max;
p_k is the position of the camera when the mechanical arm finishes its motion, with corresponding vertical height z_min.
6. The system capable of displaying the grabbing pose in real time according to claim 1, wherein: after the depth camera is started by the computer (5), depth images are sent to the computer (5) at an update frequency of 80 fps, ensuring the continuity of data transmission.
7. The system capable of displaying the grabbing pose in real time according to claim 1, wherein: the communication mode of the six-degree-of-freedom mechanical arm (1), the depth camera (3) and the computer (5) is communication under ROS, specifically, the computer (5) is connected with the six-degree-of-freedom mechanical arm (1) through a network cable, then the six-degree-of-freedom mechanical arm (1) is started, and a control instruction is sent to the six-degree-of-freedom mechanical arm (1); the computer (5) receives the depth information acquired by the depth camera (3) from the target object in real time, inputs the depth information into an internal algorithm processing unit, outputs the optimal grabbing pose of the target object after the depth information is processed by the algorithm processing unit, and displays the optimal grabbing pose through a screen of the computer (5) so as to display the optimal grabbing pose of the target object in real time, wherein the time of algorithm processing is less than 0.5 s.
8. The system capable of displaying the grabbing pose in real time according to claim 1, wherein: the algorithm processing unit computes the grabbing pose of the target object, defined as follows:
the grabbing pose of the target object, g = (c, φ, w, q), denotes the parameters involved in a complete grabbing motion;
c = (x, y, z) represents the three-dimensional coordinates of the grabbing point on the target object, i.e. the target position the clamping jaw needs to reach;
x, y and z respectively denote the X-, Y- and Z-axis coordinates in a Cartesian coordinate system, in mm;
φ ∈ [0, π] represents the angle the clamping jaw needs to rotate to grab the target object;
w represents the width the clamping jaw needs to open to grab the target object, in mm;
q ∈ [0, 1] represents the grabbing quality; by convention, the larger the value, the higher the grabbing success rate;
in order to combine the observations taken at each time step along the viewpoint trajectory, the workspace of the six-degree-of-freedom mechanical arm and the two-finger clamping jaw is represented as a two-dimensional grid map M of J × K cells, J and K respectively denoting the length and width of M; each cell corresponds to a u × u physical area and serves as a unit cell, u being the size of the unit cell;
in each cell (j, k), which corresponds to a u × u unit cell of the physical region (j and k indexing its length and width), the grabbing quality observations q are added to a vector q_{j,k} and discretized into n_q intervals, n_q indexing the rows of the grid histogram; the combined quality and angle observations (q, φ) are recorded in a two-dimensional histogram m_{j,k} with n_q × n_φ bins, the abscissa n_φ encoding the rotation angle and the ordinate n_q encoding the grabbing quality; these vectors represent the distribution of observations at each point and form the basis of information acquisition; the grid map assigns each cell a number representing a probability, and the larger the number, the larger the information gain (n_q, n_φ) in that cell region; whether a region contains the target object is determined from the numbers in the grid map: cells with large numbers contain the object, while cells with small numbers do not.
9. The system capable of displaying the grabbing pose in real time according to claim 8, wherein: the grid map is obtained by digitally transforming a depth image of the object, and the grab within region (j, k) is defined by parameterizing the average of the observations in that region:

g_{j,k} = (c_{j,k}, \bar{\phi}_{j,k}, \bar{w}_{j,k}, \bar{q}_{j,k})

where g_{j,k} represents the grab within region (j, k); c_{j,k} represents the three-dimensional position of the grab center point; \bar{\phi}_{j,k} is the mean of \phi_{j,k}, the angle the two-finger clamping jaw must rotate to grab within region (j, k); \bar{w}_{j,k} is the mean of w_{j,k}, the width the two-finger clamping jaw must open to grab within region (j, k); and \bar{q}_{j,k} is the mean of q_{j,k}, the grabbing quality of the two-finger clamping jaw within region (j, k).
10. The system capable of displaying the grabbing pose in real time according to claim 8, wherein: the average observed values are computed as follows; for a single cell, the average grabbing quality observation \bar{q} is given by:

\bar{q} = \frac{1}{|N_q|} \sum_{n_q \in N_q} q_{n_q}

where N_q denotes the set of intervals n_q, and q_{n_q} denotes the quality value with subscript n_q;

the mean rotation angle \bar{\phi} is the vector mean of the angle observations weighted by the corresponding grasp quality observations:

\bar{\phi} = \arctan\left(\frac{\Pi}{\Psi}\right), \qquad \Pi = \sum_{n_q \in N_q} \sum_{n_\phi \in N_\phi} q \sin\phi, \qquad \Psi = \sum_{n_q \in N_q} \sum_{n_\phi \in N_\phi} q \cos\phi

where \Pi denotes the quality-weighted sum over the grid histogram of the sine values of all grabbing angles, \Psi the corresponding sum of cosine values, N_q the set of n_q, and N_\phi the set of n_\phi;

the average opening width of one cell, \bar{w}, is the average of the n observations:

\bar{w} = \frac{1}{n} \sum_{i=1}^{n} w_i

where n denotes the number of w values.
CN202010132892.6A 2020-02-29 2020-02-29 System capable of displaying grabbing pose in real time Active CN111360826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010132892.6A CN111360826B (en) 2020-02-29 2020-02-29 System capable of displaying grabbing pose in real time


Publications (2)

Publication Number Publication Date
CN111360826A true CN111360826A (en) 2020-07-03
CN111360826B CN111360826B (en) 2023-01-06

Family

ID=71200197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010132892.6A Active CN111360826B (en) 2020-02-29 2020-02-29 System capable of displaying grabbing pose in real time

Country Status (1)

Country Link
CN (1) CN111360826B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180364866A1 (en) * 2016-01-29 2018-12-20 Abb Schweiz Ag Method for calibrating touchscreen panel with industrial robot and system, industrial robot and touchscreen using the same
CN108573221A (en) * 2018-03-28 2018-09-25 重庆邮电大学 A kind of robot target part conspicuousness detection method of view-based access control model
US20190381670A1 (en) * 2018-06-17 2019-12-19 Robotic Materials, Inc. Systems, Devices, Components, and Methods for a Compact Robotic Gripper with Palm-Mounted Sensing, Grasping, and Computing Devices and Components

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113499138A (en) * 2021-07-07 2021-10-15 南开大学 Active navigation system for surgical operation and control method thereof

Also Published As

Publication number Publication date
CN111360826B (en) 2023-01-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant