WO1998006015A1 - Technique de commande du mouvement pour position d'apprentissage de robot - Google Patents

Technique de commande du mouvement pour position d'apprentissage de robot

Info

Publication number
WO1998006015A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
teaching
movement
mark
state
Prior art date
Application number
PCT/JP1997/002766
Other languages
English (en)
Japanese (ja)
Inventor
Atsushi Watanabe
Tetsuaki Kato
Atsuo Nagayama
Hidetoshi Kumiya
Original Assignee
Fanuc Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fanuc Ltd filed Critical Fanuc Ltd
Publication of WO1998006015A1 publication Critical patent/WO1998006015A1/fr


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems electric
    • G05B19/18: Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/408: Numerical control [NC] characterised by data handling or data format, e.g. reading, buffering or conversion of data
    • G05B2219/00: Program-control systems
    • G05B2219/30: Nc systems
    • G05B2219/36: Nc in input of data, input key till input tape
    • G05B2219/36404: Adapt teached position as function of deviation 3-D, 2-D position workpiece
    • G05B2219/36412: Fine, autonomous movement of end effector by using camera
    • G05B2219/36414: Compare image detected path with stored reference, difference corrects position
    • G05B2219/50: Machine tool, machine tool null till machine tool work handling
    • G05B2219/50138: During setup display is red, after setup display is green colour

Definitions

  • The present invention relates to a movement control method for teaching a robot position and, more particularly, to a movement control method for quickly moving a robot to a desired teaching position using a visual target and a visual sensor that recognizes it.
  • the most commonly used robot position teaching method is the jog feed (robot operation by manual input) method.
  • The operator operates the jog feed buttons of the teaching operation panel to move the robot to the desired teaching position and teaches the robot that position.
  • When offline teaching has been performed, when rough position teaching is completed, or when a teaching position is corrected, the teaching position is finely adjusted by jog feed.
  • The operation of moving the robot to the desired position is performed while visually confirming the relative position and posture between the robot hand (or end effector) and the desired teaching point, and therefore requires skill on the part of the operator.
  • Since the number of points (positions/postures) that need teaching is usually large, it is not unusual to spend a great deal of time on the teaching work.
  • Moreover, the teaching accuracy tends to vary, which poses a reliability problem.
  • An object of the present invention is to provide a movement control method for teaching a position of a robot, which can reduce a burden on an operator during a teaching operation and improve efficiency and reliability of the teaching operation.
  • The present invention has been made in view of the above-mentioned problems of the related art; by using a visual sensor and appropriate visual target means, it enables a robot to move autonomously to a desired teaching position, thereby solving the problems of the conventional technology.
  • The method according to the present invention includes a stage of recording, in a control means of a system including the robot and the visual sensor, expression data representing the desired teaching position reaching state, and a teaching stage of moving the robot to the desired teaching position.
  • The step of moving to the desired teaching position includes an autonomous movement execution step in which the movement to the desired teaching position is performed autonomously on the basis of the stored desired teaching position reaching state expression data. This is an important feature of the invention.
  • the desired teaching position reaching state expression data is expressed using the visual target means recognizable by the visual sensor.
  • A state equivalent to reaching the desired teaching position is acquired through recognition by the visual sensor.
  • the visual target means is used as a guide visual target means for guiding the robot to the desired teaching position through recognition by the visual sensor.
  • The software processing executed in the control means for the autonomous movement guides the robot so that the recognition state of the guide visual target means by the visual sensor becomes equal to the recognition state corresponding to the desired teaching position reaching state expression data.
  • the visual target means can take the form of a marking means that includes a mark coordinate system recognizable by a visual sensor.
  • The desired teaching position reaching state expression data recorded in the recording stage includes data representing the position and posture of the mark coordinate system on the image.
  • This mark means is used to prepare a mark coordinate system at a position having a fixed relative relationship with the desired teaching position, in order to provide a guide visual target means for the autonomous movement of the robot.
  • The movement control based on the software processing executed in the control means guides the robot so that the recognition state of the mark coordinate system by the visual sensor matches the recognition state corresponding to the desired teaching position reaching state expression data.
  • As the mark means, a mark member that can be fixed on the surface of the representative work to be taught can be used; on this mark member, a mark coordinate system is drawn by a dot pattern or the like.
  • Another available form of visual target means uses a robot-supported light beam (laser) projecting means: the light beam projection direction is adjusted so that a light spot is formed at the position of the teaching target point on a reference projection surface.
  • This state corresponds, for the visual sensor, to a state equivalent to reaching the teaching position. The light spot is therefore recognized by the visual sensor, and the desired teaching position reaching state expression data is acquired so as to include data representing the position of the light spot on the image.
  • The autonomous movement step is performed with the reference light projecting surface removed, while the light beam projection direction as viewed from the camera is maintained in the state equivalent to reaching the desired teaching position. That is, the movement control of the robot in the autonomous movement execution stage uses, as the guide visual target means, a light spot formed by projecting the light beam onto the surface on which the desired teaching position exists.
  • The software processing for robot movement control guides the robot so that the recognition state of the light spot by the visual sensor matches the recognition state corresponding to the desired teaching position reaching state expression data.
  • Prior to the autonomous movement to the desired teaching position, jog feed may be performed as a preliminary approach to the desired teaching position.
  • The software processing in the stage of executing the autonomous movement to the desired teaching position can repeat a cycle of: comparing the data representing the position on the image of the guide visual target means recognized by the visual sensor at that time with the desired teaching position reaching state expression data; controlling the movement of the robot on the basis of the comparison result; and judging the completion or non-completion of reaching the desired teaching position, until completion of the reaching state is determined (a minimal sketch of such a cycle follows).
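  • The cycle just described can be summarized in Python as a minimal sketch; the interfaces used here (camera, robot, recognize_target, pose_error) are hypothetical placeholders, not names taken from this publication:

```python
# A minimal sketch of the processing cycle; 'camera', 'robot',
# 'recognize_target' and 'pose_error' are hypothetical placeholders.
import time

EPSILON = 1e-3  # completion threshold ("sufficiently small", assumed value)

def autonomous_approach(robot, camera, recognize_target, pose_error, goal_state):
    """Repeat: recognize the guide visual target, compare its on-image data
    with the stored reaching-state expression data, command a corrective
    move; stop when completion of the reaching state is judged."""
    while True:
        image = camera.capture_image()
        state = recognize_target(image)          # on-image position of target
        if state is None:
            raise RuntimeError("guide visual target lost from camera view")
        error = pose_error(state, goal_state)    # comparison with stored data
        if error < EPSILON:                      # reaching state realized
            robot.stop()
            return
        robot.move_toward(error)                 # movement based on comparison
        time.sleep(0.05)                         # one processing cycle
```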
  • Since the visual target observed on reaching the desired teaching position is input in advance as data, the desired teaching position can be reached efficiently without placing a burden on the operator.
  • the convenience of the robot during position teaching work can be further improved. Therefore, even when the number of desired teaching positions is large, the burden of teaching work can be reduced.
  • FIG. 1 is a diagram conceptually illustrating the overall picture of a movement control method for teaching a robot position according to the present invention.
  • FIG. 2 is a schematic block diagram of a main part of the system used in the present embodiment, focusing on the robot control device.
  • FIG. 3 is a diagram showing a schematic configuration of a panel surface of the teaching operation panel 40.
  • FIG. 4 is a diagram for explaining the operation of the autonomous movement in the embodiment according to the method 1.
  • FIG. 5 is a diagram illustrating a configuration of a mark member used in the embodiment according to the method 1.
  • FIG. 6 is a diagram illustrating a vector describing a viewing direction in which a camera looks at a mark coordinate system in an embodiment according to method 1.
  • FIG. 7 is a diagram for explaining the outline of the algorithm necessary for the autonomous movement to the desired teaching position in the embodiment according to method 1.
  • FIG. 8 is a flowchart illustrating an outline of the processing for the autonomous movement to the desired teaching position in the embodiment according to method 1.
  • FIG. 9 is a diagram for explaining stages PR1 to PR3 among the various stages of the preparation work in the embodiment according to method 2.
  • FIG. 10 is a diagram for explaining stages PR4 to PR6 of the various stages of the preparation work in the embodiment according to method 2.
  • FIG. 11 is a diagram for explaining stages TH1 to TH3 among the various stages of the teaching operation in the embodiment according to method 2.
  • FIG. 12 is a diagram for explaining stages TH4 and TH5 among the various stages of the teaching operation in the embodiment according to method 2.
  • FIG. 13 is a diagram for explaining a method of obtaining the vector <eh> in the embodiment according to method 2.
  • FIG. 14 is a diagram for explaining the vectors <dm> and <ds> in the embodiment according to method 2.
  • FIG. 15 shows the first half (L1 to L12) of the flowchart illustrating the outline of the processing algorithm for controlling the autonomous movement in the embodiment according to the method 2.
  • FIG. 16 shows the latter half (L13 to L19) of the flowchart illustrating the outline of the processing algorithm for controlling the autonomous movement in the embodiment according to the method 2.
  • FIG. 17 is a flowchart showing the outline of the operation procedure and processing of the teaching operation panel when switching from the jog feed mode to the autonomous movement mode in the system of the method 1 or method 2 embodiment.
  • FIG. 1 is a diagram conceptually illustrating an overall image of a movement control method for teaching a position of a robot according to the present invention.
  • Reference numeral 1a denotes desired teaching position reaching state expression data input means for inputting data representing the desired teaching position reaching state.
  • The desired teaching position reaching state expression data input means 1a has a function of inputting into the memory 4, in advance, data (hereinafter referred to as "desired position reaching state expression data") representing the relative positional relationship between the visual target B1 and the robot hand 2a in the state in which the movement of the robot 2 to the desired teaching position with respect to the visual target B1 has been completed.
  • The desired teaching positions are exemplified by the four symbols A1 to A4 on the surface 3a of the representative work 3.
  • Reference symbols B1 to B4, drawn in the vicinity of A1 to A4, are visual target means representing the positions A1 to A4; they are given in a fixed relationship with the corresponding desired teaching positions A1 to A4. Specific examples of visual target means will be described later.
  • Each visual target position B1 to B4 and each desired teaching position A1 to A4 are given so that they are close to each other but do not coincide; in some cases, however, they may be made to coincide.
  • Reference numeral 2a indicates the robot's hand position
  • reference numeral 2b indicates the robot's teaching point (usually the tool tip point; TCP).
  • the hand position 2a of the robot is represented by the origin of the coordinate system set on the flange at the end of the final arm.
  • The operator operates the robot manual operation means 5 (robot operation command manual input means, such as a teaching operation panel with jog feed buttons), and the movement control means 6 performs the movement control of the robot 2, including the movement control for the autonomous movement to the desired teaching position.
  • The movement control for this autonomous movement is performed on the basis of the desired position reaching state expression data input by the input means 1a and of relative position recognition data representing the relative positional relationship between the robot hand at each point in time (a camera or the like fixed to it) and the visual target means.
  • The above-described relative position recognition data is provided at least once during the movement control process by the robot hand/visual target relative position recognition means 1b.
  • In the state of having reached the desired teaching position, TCP 2b naturally coincides with one of the desired teaching positions A1 (or one of A2 to A4) (exemplified by reference numeral 2b').
  • The desired teaching position reaching state expression data input means 1a and the relative position recognition means 1b are embodied using a visual sensor.
  • The visual target means B1 to B4 are embodied in the form of a mark coordinate system recognizable by the visual sensor, or of a light spot formed by a laser beam.
  • Method 1: a form in which the visual target means is embodied as a mark coordinate system.
  • Method 2: a form in which the visual target means is embodied as a light spot formed by a laser beam.
  • FIG. 2 is a schematic block diagram of a main part of a system used in the present embodiment, focusing on a robot control device.
  • The robot controller, designated as a whole by reference numeral 30, is equipped with a processor board 31.
  • The processor board 31 has a central processing unit (hereinafter referred to as a CPU) 31a composed of a microprocessor, a ROM 31b, and a RAM 31c.
  • the CPU 31a controls the entire robot controller in accordance with the system program stored in the ROM 31b.
  • In the RAM 31c, in addition to created operation programs and various set values, the program defining the robot-side processing required to execute the autonomous movement to the desired teaching position according to method 1 or method 2, together with the related setting values, is stored.
  • A part of the RAM 31c is also used for temporary data storage for the calculation processing performed by the CPU 31a.
  • a hard disk drive prepared as an external device is used as appropriate.
  • the processor board 31 is connected to a bus 37, and commands and data are exchanged with other parts in the robot controller 30 via the bus connection.
  • the digital servo control circuit 32 is connected to the processor board 31, and drives the servo motors 51 to 56 via the servo amplifier 33 in response to a command from the CPU 31 a.
  • the servo motors 51 to 56 for operating the respective axes are built in the mechanism of each axis of the robot 2.
  • the serial port 34 having a built-in communication interface is connected to the bus 37, while being connected to the teaching operation panel 40 having a liquid crystal display, the image processing device 20, and the laser oscillator 60.
  • the laser oscillator 60 is used in the method 2 and is not required in the method 1.
  • The teaching operation panel 40 has a size and weight that allow an operator to carry it, and jog feed buttons and the like used as robot manual operation means are provided on its panel.
  • The bus 37 is also connected to an input/output device (digital I/O) 35 for digital signals and an input/output device (analog I/O) 36 for analog signals.
  • The control unit of the end effector is connected to the digital I/O 35 or the analog I/O 36.
  • Here, an application to an arc welding robot is considered, and the power supply device of the arc welding torch is connected to the digital I/O 35.
  • the image processing device 20 is an ordinary device in which a program memory, a frame memory, an image processor, a data memory, a camera interface, and the like are combined with a CPU on a bus.
  • The camera 21 is connected to the image processing device 20 via a camera interface. This camera is used in the manner described below.
  • the program memory stores image analysis program data required by the method 1 or 2.
  • FIG. 3 is a diagram showing a schematic configuration of a panel surface of the teaching operation panel 40.
  • the display screen 41 is, for example, a liquid crystal screen, on which detailed data of a movement command program and the like are switched and displayed.
  • the function keys 42 are keys for selecting a menu displayed at the lower end of the display screen 41.
  • the teaching operation panel valid switch 43 is a switch for switching whether the operation of the teaching operation panel 40 is valid or invalid.
  • The emergency stop button 44 is a button for stopping the operation of the robot 2 in an emergency.
  • the cursor keys 45 are keys for moving a cursor displayed on the display screen 41.
  • the numeric keypad 46 is provided with numeric keys and other keys, and can input and delete numerical values and characters.
  • The jog feed buttons 47 are used, in the normal mode for performing conventional jog feed, to input movement commands by specifying the translational/rotational direction and the +/- direction.
  • In the autonomous movement mode, the same buttons are used as autonomous movement command input means to a desired teaching position, as described later.
  • FIG. 4 is a diagram for explaining the operation of the autonomous movement in the embodiment according to the method 1.
  • Mark members MK1 to MK4, corresponding in number to the desired teaching points A1 to A4 (four in this case), are affixed to the surface 3a of the representative work 3. On each mark member, the same mark coordinate system is drawn as a dot pattern, as described later.
  • The mark member MK1 is affixed with a position and posture exactly corresponding to the position and posture of the desired teaching point A1.
  • Similarly, the mark members MK2 to MK4 are affixed with positions and postures exactly corresponding to those of the desired teaching points A2 to A4.
  • The robot (illustrated only around the hand) has a welding torch 2c and a camera 21 on the hand, and the TCP 2b is set at the tip of the welding torch 2c.
  • The robot is started from the movement start position Ps, the TCP 2b set at the tip of the welding torch 2c is moved autonomously and sequentially to the desired teaching points A1 to A4, and teaching is performed at each desired teaching position.
  • the start position Ps of the autonomous movement is generally arbitrary as long as the first mark MK1 is within the field of view of the camera 21.
  • FIG. 5 is a diagram illustrating a configuration of a mark member used in the embodiment according to the method 1.
  • The mark coordinate system ΣM is composed of five circular dots D0, D1, D-1, D2, and D-2 arranged on the mark member MK in a grid pattern with an interval a.
  • The dot interval a is a positive constant value; the center positions of the dots are therefore D0(0, 0, 0), D1(a, 0, 0), D-1(-a, 0, 0), D2(a, a, 0), and D-2(-a, a, 0).
  • the mark coordinate system may be configured with other patterns as long as it can represent a three-dimensional orthogonal coordinate system.
  • a hole MH is provided at an appropriate fixed position of the mark member MK.
  • The hole MH indicates the desired teaching position; when the member is affixed to the surface 3a of the representative work 3, the affixing position is selected so that the representative point of the hole (for example, its center) coincides with the desired teaching position.
  • The affixing posture is selected with reference to the direction of the mark coordinate system ΣM. That is, if the affixing posture differs, the posture taught later will also differ. For example, in FIG. 4, the affixing postures of MK1 and MK4 differ by 90 degrees, and the postures taught later also differ by 90 degrees.
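  • For illustration, the pose of the mark coordinate system seen from the camera can be recovered from the five detected dot centers with a standard PnP solver. The following Python sketch uses OpenCV's solvePnP as a modern stand-in, not the derivation used in this publication; the spacing a and the intrinsic matrix K are assumed known from calibration:

```python
# A sketch (not this publication's own derivation) of recovering the mark
# frame pose M in the camera frame from the five dot centers via solvePnP.
import numpy as np
import cv2

a = 10.0  # dot spacing, e.g. in mm (assumed value)
# D0, D1, D-1, D2, D-2 in the mark coordinate system (coordinates as above)
MARK_POINTS = np.array([
    [0, 0, 0], [a, 0, 0], [-a, 0, 0], [a, a, 0], [-a, a, 0],
], dtype=np.float64)

def mark_pose(image_points, K, dist=None):
    """image_points: 5x2 detected dot centers (same order as MARK_POINTS).
    Returns the 4x4 homogeneous matrix M of the mark frame in the camera frame."""
    ok, rvec, tvec = cv2.solvePnP(MARK_POINTS, image_points, K, dist)
    if not ok:
        raise RuntimeError("mark pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)       # 3x3 rotation R_M
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = tvec.ravel()          # 3x1 position l_M
    return M
```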
  • The robot is moved by jog operation in the normal mode (the conventional method) until the tool tip 2b coincides with the representative point MA of the hole MH of the mark member MK, thereby realizing the desired teaching position reaching state.
  • In this state, an image is taken with the camera 21 via the robot controller 30 to acquire an image for creating the desired teaching position reaching state expression data.
  • The acquired image is subjected to image processing in the image processing device 20, and data representing the relative position and posture of the mark coordinate system ΣM with respect to the camera 21 is generated (for details, see below).
  • Figure 7 describes the skeleton of the algorithm required for autonomous movement to the desired teaching position.
  • Mg is equivalent to the desired teaching position attainment state expression data acquired in the above preparation work.
  • C is a matrix that represents the position and orientation of the camera coordinate system as viewed from the flange, and its data is obtained by calibration of the camera 21.
  • The basic equation that defines the relationship among T0, M0, Tg, and Mg is as follows.
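  • A plausible form of this relation, reconstructed from the surrounding definitions under the assumption that the mark coordinate system is fixed in the robot base frame (so the composed flange-camera-mark pose is the same at the current position and at approach completion), is:

```latex
% Plausible reconstruction, not verbatim from the publication:
T_0 \, C \, M_0 = T_g \, C \, M_g
\quad\Longrightarrow\quad
T_g = T_0 \, C \, M_0 \, M_g^{-1} \, C^{-1}
```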
  • T0 represents the robot's current position data, which can be obtained in the robot controller at any time.
  • the data of C is obtained separately by an appropriate camera calibration (an example of calibration will be described later). Therefore, the method of obtaining M0 and Mg will be described.
  • Each of M0 and Mg represents the position and posture of the mark coordinate system ΣM as viewed from the camera coordinate system (currently and at the time of approach completion, respectively).
  • M0 and Mg are homogeneous transformation matrices representing the position/posture relationship between three-dimensional orthogonal coordinate systems, and can be written as in equation (4): M = [ R_M  l_M ; 0 0 0 1 ], where R_M is a 3x3 matrix representing rotation and l_M is a 3x1 matrix (vector) representing position.
  • The vectors <e1> and <e2> are defined by the following equations (5) and (6).
  • The information obtained from the visual sensor with respect to the mark coordinate system ΣM is the direction (line of sight) in which the center of each of the circular dots D0, D1, D-1, D2, and D-2 is seen from the origin of the camera coordinate system.
  • These directions can be represented by the five unit vectors <d0>, <d1>, <d-1>, <d2>, and <d-2>, as shown in FIG. 6.
  • Since <d0>, <d1>, <d-1>, <d2>, and <d-2> all have length 1, the absolute value of the dot products d_ij between them never exceeds 1.
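  • A minimal Python sketch of how such unit line-of-sight vectors can be obtained, assuming a pinhole camera with a known intrinsic matrix K (an assumption; the publication does not spell out this computation):

```python
import numpy as np

def line_of_sight(u, v, K):
    """Unit vector <d> from the camera-coordinate origin toward the dot
    whose center was detected at pixel (u, v), for intrinsic matrix K."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

# The five dot centers give <d0>, <d1>, <d-1>, <d2>, <d-2>; their pairwise
# dot products d_ij then all lie in [-1, 1], as noted above.
```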
  • C is a matrix representing the position and orientation of the camera coordinate system as viewed from the flange, and its data is obtained by the calibration of the camera 21.
  • Various calibration methods are known, and any of them can be used.
  • Equations (24a) and (24b) are further transformed to obtain the following equations (26) and (27).
  • T1 and t-1 can be obtained by the following equations (29), (30a), and (30b).
  • Equations (31) and (32) are decomposed into the following equations (38) to (41).
  • Equations (40) and (41) can be summarized as the following equation (45); by applying the least squares method, <lc> can be obtained.
  • Prior to the start of the operation, the robot is sequentially moved to the three positions CB1 to CB3 described above.
  • the autonomous movement to the desired teaching position is executed by the processing cycle including the following steps as shown in the flowchart of FIG.
  • Step K2: Compare the stored Mg data with M0 calculated from <d0>, <d1>, <d-1>, <d2>, and <d-2> obtained in step K1. If they match, the robot has reached the desired teaching position, so go to step K6. If they do not match, the desired teaching position has not been reached, and the process returns to step K1 via steps K3 to K5.
  • Various algorithms can be used to evaluate the degree of coincidence between Mg and M0. For example, a judgment index Δ is calculated by the following equation (46); if Δ < ε (ε being a sufficiently small positive value), the approach is complete, and if Δ ≥ ε, the approach is not complete.
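  • A sketch of this comparison in Python, under two stated assumptions: the reconstructed basic relation T0·C·M0 = Tg·C·Mg above, and a simple Frobenius-norm distance standing in for equation (46), which is not reproduced in this text:

```python
import numpy as np

EPS = 1e-3  # the "sufficiently small positive value" epsilon (assumed)

def judgment_index(M0, Mg):
    """One of the 'various algorithms': a matrix distance between the current
    and stored recognition states of the mark coordinate system."""
    return np.linalg.norm(M0 - Mg)

def goal_flange_pose(T0, C, M0, Mg):
    """Flange pose at which the camera would see the mark exactly as in the
    stored approach-completion state (from the reconstructed basic equation)."""
    return T0 @ C @ M0 @ np.linalg.inv(Mg) @ np.linalg.inv(C)

def approach_complete(M0, Mg, eps=EPS):
    return judgment_index(M0, Mg) < eps  # delta < epsilon: approach complete
```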
  • FIG. 9 is a diagram for explaining stages PR1 to PR3 of the various stages of the preparation work, and FIG. 10 is a diagram for explaining the remaining stages PR4 to PR6.
  • FIG. 11 is a diagram for explaining stages TH1 to TH3 of the various stages of the teaching operation, and FIG. 12 is a diagram for explaining stages TH4 and TH5.
  • (PR1) The laser head 22 (the irradiation head of the laser oscillator 60 shown in FIG. 2) and the reference tool 25 are attached to the tip of the robot, represented by the flange position 2a.
  • The laser head 22 is mounted via an appropriate adjusting mechanism 23 so that the projection direction of the laser beam 24 can be adjusted.
  • A mark 26 indicating the TCP (2b) is provided in the portion indicated by the arrow 27 near the tip of the reference tool 25. Note that this mark 26 is for visual observation; as shown, with the laser head 22 merely attached, the laser beam 24 generally does not impinge on the mark 26 on the reference tool 25.
  • (PR2) Adjust the direction of the laser head 22 so that the laser beam 24 is incident on the mark 26 on the reference tool 25. As a result, a light spot is formed at the position of the mark 26, that is, at the TCP position 2b. Thereafter, the laser head 22 is fixed in this state.
  • (PR5) Data of the vector <eh> is created and stored as a vector representing the direction in which the laser beam 24 exists as viewed from the origin Oc of the camera coordinate system.
  • <eh> is a vector that satisfies the following two conditions.
  • Condition 1: the vector lies in the plane in which the laser beam 24 extends.
  • Condition 2: of the candidate vectors, it is the one closer to the direction in which the laser head 22 is viewed from the camera 21.
  • <eh> can be obtained, for example, by the method shown in FIG. 13.
  • The robot is moved by jog feed above a suitable plane (here, the representative work surface 3a is used), and a spot F of the laser beam 24 is formed, as shown in (1) of FIG. 13.
  • From this position above the surface 3a, the robot approaches the surface 3a, creating a state in which the TCP position 2b comes below the surface 3a, as shown in (2) of FIG. 13.
  • As a result, the position of the spot F moves so as to come beyond the intersection N of the surface 3a with the line of sight 21a of the camera 21 looking at the TCP position (2b) from the position of (1) in FIG. 13.
  • The camera 21 takes an image of the spot F. Image processing is then performed in the image processing device 20, and data of a vector <es> representing the direction of the spot F as viewed from the camera coordinate system is obtained.
  • The vector <eh> is then calculated from <es> by the following equation (47).
  • the robot is moved by jog feed above the representative work surface 3a where the first desired teaching position exists, and the spot F of the laser beam 24 is formed on the representative work surface 3a.
  • the desired teaching position A shall be marked in advance with an appropriate teaching point mark recognizable by the visual sensor.
  • Let the flange position 2a shown in FIG. 11 be the autonomous movement start position Ps, and start the autonomous movement from the position Ps.
  • the transition from the jog feed to the autonomous movement can be, for example, automatic switching using a visual sensor output.
  • TCP position 2b is above surface 3a, and laser beam 24 forms a spot F on surface 3a via this point.
  • At this time, the three points consisting of the teaching point mark A, the spot position F, and the TCP (2b) are at mutually different positions.
  • (T1) The line of sight 21a viewing the teaching point mark A from the camera 21 is obtained. (T2) The robot is then operated by the autonomous movement processing so that this line of sight 21a passes through the TCP (2b); for this, it is reasonable for the robot to rotate around the origin Oc of the camera coordinate system.
  • In this way, the movement from the autonomous movement start position Ps (expressed as a flange position) to the desired teaching position Pt (also expressed as a flange position) is achieved.
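  • The rotation about the camera origin Oc in stage T2 can be sketched as follows, under the assumption that the goal is to rotate until the unit line of sight toward mark A coincides with a fixed direction toward the TCP; the arguments dm and et mirror the <dm> and <et> vectors used in the algorithm description below:

```python
import numpy as np

def rotation_between(dm, et):
    """3x3 rotation R with R @ dm == et (Rodrigues formula), used to turn the
    hand about the camera origin until the sight line through mark A passes
    through the TCP."""
    dm = dm / np.linalg.norm(dm)
    et = et / np.linalg.norm(et)
    axis = np.cross(dm, et)
    s, c = np.linalg.norm(axis), float(np.dot(dm, et))
    if s < 1e-12:
        if c > 0:
            return np.eye(3)              # already aligned
        # opposite directions: 180-degree turn about any perpendicular axis
        p = np.array([0.0, 1.0, 0.0]) if abs(dm[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
        k = np.cross(dm, p)
        k /= np.linalg.norm(k)
        return 2.0 * np.outer(k, k) - np.eye(3)
    k = axis / s                          # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```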
  • the processing algorithm will be described.
  • As in method 1, the symbol < > represents a vector, but the definitions of the vectors differ from those in method 1.
  • The autonomous movement is performed as a processing cycle according to the algorithm outlined in the flowcharts of FIG. 15 (L1 to L12) and FIG. 16 (L13 to L19). Each step is described below, focusing on the algorithm. The meanings of the symbols used in the explanation are as follows.
  • T: a matrix representing the robot's position and posture on the robot coordinate system.
  • R: a rotation matrix describing the relationship between the <dm> and <et> directions, such that R<et> = <dm>.
  • (L1) The flag is initialized to 1, and the step length l is initialized to an appropriate initial value (for example, 3 cm).
  • (L2) The register value representing the data of the matrix T is updated on the basis of the robot's current position.
  • If the teaching point mark A cannot be captured in the camera field of view, the processing ends. (In that case, the operator uses jog feed to adjust the robot position so that the teaching point mark A is reliably captured in the camera's field of view.) If <dm> …
  • (L10) This step is executed for the first time when it is determined in step L6 that the absolute value of the vector <dr> is smaller than the threshold value δr. An attempt is made to find, from the image data, a unit vector <ds> representing the line of sight looking at the spot position F from the origin Oc.
  • Since the line of sight looking at the spot F and the line of sight looking at the teaching point mark are substantially the same, it is rare for the spot F not to fit in the camera's field of view. Therefore, in most cases the vector <ds> is determined and the process proceeds to step L12; if <ds> cannot be obtained, the processing is terminated (and the cause investigated separately).
  • (L16) If the product of Δ1 and Δ0 is positive, the process goes to step L18; if negative, it goes to step L17.
  • (L12) If a judgment output of "yes" is obtained in step L12, it is understood that the desired teaching position reaching state shown in (TH5) of FIG. 12 has been realized; the robot is stopped and the processing ends.
  • In this way, since data expressing the state of having reached the desired teaching position is input in advance, the robot moves autonomously to the desired teaching position, which improves work efficiency.
  • The switching from the jog feed mode by manual command input to the autonomous movement mode for the autonomous movement to the desired teaching position is preferably automated by the internal processing of the control means (the robot controller 30 and the image processing device 20).
  • FIG. 17 is a flowchart showing the outline of the operation procedure and the processing of the teaching operation panel when switching from the jog feed mode to the autonomous movement mode inside the system.
  • The processing of each step is briefly described as follows. In this example, the output of the visual sensor is used to determine the timing of the mode switching.
  • Step S1: It is determined whether any one of the jog feed buttons 47 has been pressed to issue a jog feed command. If so, the process proceeds to step S2; if not, step S1 is repeated.
  • Steps S2/S3: A predetermined axis is controlled according to the movement content (direction, axis, etc.) specified by the jog feed button 47, and the robot starts the jog feed movement. For example, if movement in the J6-axis + direction is specified, a move command to move the J6 axis in the + direction is created and passed to the servo system.
  • Step S4: It is checked whether a signal indicating that the teaching point mark A (and, in the case of method 2, the spot F) has entered the field of view of the camera has been received from the image processing device 20. If it has been received, the process proceeds to step S5; otherwise, it returns to step S3.
  • Step S6: It is determined whether the jog feed button 47 has been turned off. If it has, the process proceeds to step S8; if not, to step S7.
  • Step S7: It is determined whether the desired teaching position has been reached. If so, the process proceeds to step S8; if not, it returns to step S6.
  • The method of determining whether the desired teaching position has been reached or not is the same as described in the explanations of method 1 and method 2.
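  • The S1 to S8 flow can be condensed into the following sketch, with entirely hypothetical robot/vision/button interfaces (none of these names come from the publication):

```python
def teach_with_auto_switch(robot, vision, jog_button):
    """Jog feed continues until the vision system reports the teaching point
    mark (and, for method 2, the spot) in the camera field of view; the system
    then switches automatically to the autonomous movement mode."""
    while not jog_button.pressed():            # S1: wait for a jog command
        pass
    while not vision.mark_in_view():           # S3/S4: jog until mark appears
        robot.jog_step(jog_button.motion())    # S2: conventional jog feed
    # S5 onward: autonomous movement mode
    while jog_button.pressed() and not robot.at_taught_position():  # S6/S7
        robot.autonomous_step()
    robot.stop()                               # S8: movement ends
```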
  • As described above, since the data of the position on the image of the visual target means observed on reaching the desired teaching position is input in advance, the robot can move autonomously to the desired position using the visual target means as a navigation index. Therefore, the burden on the operator required for the robot position teaching work is greatly reduced.
  • the transition from jog feed to autonomous movement is performed by automatic switching within the system, thereby further improving the convenience of the robot during the work of teaching the position.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Numerical Control (AREA)
  • Manipulator (AREA)

Abstract

A mark member (A1) is affixed to a workpiece (3) in correspondence with a desired teaching position on that workpiece (3). The movement of the robot is controlled autonomously, and the position teaching is executed, on the basis of: the current positions of the robot hand at the various moments of its movement; the positions of the mark member in the coordinate system of a camera at the various moments of the robot's movement, together with its postures as viewed from the camera coordinate system; the position of the mark member (A1) in the camera coordinate system when the robot reaches the desired teaching position, together with the posture as viewed from the camera coordinate system; and, finally, the position and posture of the camera coordinate system as viewed from the robot hand.
PCT/JP1997/002766 1996-08-07 1997-08-07 Technique de commande du mouvement pour position d'apprentissage de robot WO1998006015A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP8/223250 1996-08-07
JP8223250A JP2925072B2 (ja) 1996-08-07 1996-08-07 ロボットの位置教示のための移動制御方式

Publications (1)

Publication Number Publication Date
WO1998006015A1 true WO1998006015A1 (fr) 1998-02-12

Family

ID=16795159

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1997/002766 WO1998006015A1 (fr) 1996-08-07 1997-08-07 Technique de commande du mouvement pour position d'apprentissage de robot

Country Status (2)

Country Link
JP (1) JP2925072B2 (fr)
WO (1) WO1998006015A1 (fr)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3805317B2 (ja) * 2003-03-17 2006-08-02 ファナック株式会社 教示位置修正方法及び教示位置修正装置
JP4167940B2 (ja) * 2003-05-29 2008-10-22 ファナック株式会社 ロボットシステム
JP4021413B2 (ja) * 2004-01-16 2007-12-12 ファナック株式会社 計測装置
JP2006346790A (ja) * 2005-06-15 2006-12-28 Toyota Motor Corp ロボットと干渉判別方法と干渉判別装置
JP5113623B2 (ja) * 2008-05-20 2013-01-09 ファナック株式会社 計測装置を用いてロボットの位置教示を行うロボット制御装置
JP4982903B2 (ja) * 2008-09-11 2012-07-25 コグネックス・コーポレイション 制御システム、制御方法およびプログラム
EP2444119B1 (fr) 2009-06-15 2016-09-21 Osaka University Stimulateur magnétique
EP2772282B1 (fr) 2011-10-24 2019-03-27 Teijin Pharma Limited Système de stimulation magnétique transcrânienne
JP5850962B2 (ja) 2014-02-13 2016-02-03 ファナック株式会社 ビジュアルフィードバックを利用したロボットシステム
JP2017077609A (ja) * 2015-10-21 2017-04-27 ファナック株式会社 ロボットの手首部の機構パラメータを校正する校正装置および校正方法
CN106736816A (zh) * 2016-11-28 2017-05-31 山东汇川汽车部件有限公司 双卡爪桁架机械手上料控制方法
JP6572262B2 (ja) 2017-06-06 2019-09-04 ファナック株式会社 教示位置修正装置および教示位置修正方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6295606A (ja) * 1985-10-22 1987-05-02 Toshiba Corp 三次元位置設定装置
JPS63105893A (ja) * 1986-10-23 1988-05-11 株式会社日立製作所 ロボットへの動作自動教示方法
JPS6446804A (en) * 1987-08-18 1989-02-21 Shin Meiwa Ind Co Ltd Teaching method for automatic machine tool
JPH0588741A (ja) * 1991-09-26 1993-04-09 Shin Meiwa Ind Co Ltd 産業ロボツト用テイーチング装置及びその位置調整方法


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8559699B2 (en) 2008-10-10 2013-10-15 Roboticvisiontech Llc Methods and apparatus to facilitate operations in image based systems
CN102672711A (zh) * 2011-03-14 2012-09-19 库卡实验仪器有限公司 机器人和运行机器人的方法
CN109514527A (zh) * 2017-09-20 2019-03-26 广明光电股份有限公司 机器手臂的教导***及方法
CN109514527B (zh) * 2017-09-20 2022-02-11 达明机器人股份有限公司 机器手臂的教导***及方法
CN110170995A (zh) * 2019-05-09 2019-08-27 广西安博特智能科技有限公司 一种基于立体视觉的机器人快速示教方法

Also Published As

Publication number Publication date
JP2925072B2 (ja) 1999-07-26
JPH1049218A (ja) 1998-02-20

Similar Documents

Publication Publication Date Title
WO1998006015A1 (fr) Technique de commande du mouvement pour position d'apprentissage de robot
EP3342550A1 (fr) Système manipulateur
WO2020221311A1 (fr) Système de commande de robot mobile basé sur un dispositif portable et procédé de commande
JP2018126835A (ja) ロボットの教示方法、ロボットシステム、プログラム及び記録媒体
JP5292998B2 (ja) ロボット装置の制御方法及びロボット装置
US20150119214A1 (en) Fastening device, robot system, and fastening method for fastening plurality of fastening members
CN110370316B (zh) 一种基于垂直反射的机器人tcp标定方法
WO2016151668A1 (fr) Dispositif d'enseignement et procédé de génération d'informations de commande
KR102121973B1 (ko) 로봇, 로봇의 제어장치 및 로봇의 위치 교시 방법
CN114227681A (zh) 一种基于红外扫描跟踪的机器人离线虚拟示教编程的方法
WO1990006836A1 (fr) Procede de commande de robot pouvant etre corrige manuellement
CN114833832B (zh) 一种机器人手眼标定方法、装置、设备及可读存储介质
JP3998741B2 (ja) ロボットの移動制御方法
JP2011104759A (ja) ロボット制御システムの教示用補助具、その教示用補助具を用いた教示方法、およびその教示方法によって教示を行うロボット制御システム
WO1990000108A1 (fr) Systeme robotique a commande visuelle
JP2015127096A (ja) ロボットシステム
JP3940998B2 (ja) ロボット装置
JP2019077026A (ja) 制御装置、ロボットシステム、制御装置の動作方法及びプログラム
JP2000288432A (ja) 塗装ロボット
JP7401184B2 (ja) ロボットシステム
JPH07210222A (ja) 位置決め制御装置
JP2002036155A (ja) ロボットのエンドエフェクタ
JPH1190868A (ja) ロボット制御装置
CN108527330A (zh) 交互式驱控一体装置、模块式机器人和存储介质
JPH07117403B2 (ja) ロボットの視覚座標校正方法およびシステム

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase