WO2022168634A1 - Robot control device, robot control method, and robot control program - Google Patents
Robot control device, robot control method, and robot control program
- Publication number: WO2022168634A1 (application PCT/JP2022/002177)
- Authority: WO (WIPO/PCT)
- Prior art keywords: state, particles, orientation, robot, manipulated
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1687—Assembly, peg and hole, palletising, straight line, weaving pattern movement
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
- G06Q10/063—Operations research, analysis or management
- G06Q50/04—Manufacturing
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40034—Disassembly, for recycling
- G05B2219/40111—For assembly
Definitions
- the present disclosure relates to a robot control device, a robot control method, and a robot control program.
- in conventional technology, the preconditions were that the relative position and orientation of the end effector and the gripped object are known and do not change during manipulation, and that the target position and orientation of the object in the robot coordinate system are known values.
- under these preconditions, the robot moves the object to its target position and orientation by performing a predetermined motion, without paying attention to the actual position and orientation of the grasped object.
- in reality, however, the relative position and orientation of the end effector and the gripped object are uncertain and can change during operation, and the values of the target position and orientation of the object in the robot coordinate system are also uncertain.
- these uncertainties increase when the position and orientation of the object to be gripped or the object to be assembled to are not fixed with high precision by jigs. Therefore, in the conventional technology, even if the robot itself moves exactly according to the predetermined motion, the gripped object may not reach its target position and orientation, and the operation of gripping the object and attaching it to another object may fail.
- the present disclosure has been made in view of the above circumstances, and aims to provide a robot control device, a robot control method, and a robot control program for controlling a robot so as to manipulate an operation object with high accuracy.
- a robot control device according to the present disclosure controls a robot that manipulates a manipulated object so that the object transitions from a state in which it is separated from an object of interest located in the environment to a completion state in which it is in contact with the object of interest in a specific manner. The device includes:
- a target state setting unit that sets, as the target state of the current movement, either an intermediate target state partway along the movement of the manipulated object toward the completion state, or the completion state itself;
- an observation unit that acquires sensor observation results regarding the position and orientation of the manipulated object and the presence or absence of contact between the manipulated object and the object of interest;
- a particle set setting unit that sets a set of particles representing the uncertainty of the position and orientation of the manipulated object, where each particle included in the set represents one possible position and orientation of the manipulated object;
- a particle set adjustment unit that increases the weight of particles representing a position and orientation closer to that indicated by the observation result, and that, when the observation result indicates that contact has occurred, increases a particle's weight the closer the object of interest and a manipulated object placed at the position and orientation represented by that particle are to a state of contact in a virtual space in which the shapes and relative positional relationship of the two objects are represented;
- a state estimation unit that calculates an estimated state, the position and orientation of the manipulated object estimated based on the set of particles whose weights have been adjusted;
- an action planning unit that plans an action for moving the manipulated object from the estimated state to the target state of the current movement, and an action unit that commands the robot to execute the planned action; and
- a processing control unit that repeats these processes until the estimated state matches the completion state within a predetermined error.
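- read as an algorithm, the claimed device is a particle-filter control loop. Below is a minimal, runnable sketch of that loop in Python, reduced to a one-dimensional toy (approach along z toward a contact surface at z = 0); every constant, the noise levels, and the contact model are invented for illustration, and the patent itself works with full 6-DoF poses:

```python
import numpy as np

rng = np.random.default_rng(0)

true_z = 10.0                               # true relative position (hidden from the controller)
particles = rng.uniform(8.0, 12.0, 200)     # particle set: candidate positions
weights = np.full(200, 1.0 / 200)

guide = [5.0, 2.0, 0.0]                     # intermediate target states, then the completion state
for target in guide:                        # target state setting
    for _ in range(100):
        est = float(np.sum(weights * particles))          # state estimation (expected value)
        if abs(est - target) < 0.05:
            break
        a = float(np.clip(target - est, -1.0, 1.0))       # action planning
        true_z = max(0.0, true_z + a + rng.normal(0, 0.02))   # execution with system noise
        z_obs = true_z + rng.normal(0, 0.3)                   # pose observation with noise
        contact = true_z == 0.0                               # contact observation (assumed exact)
        particles = np.maximum(0.0, particles + a + rng.normal(0, 0.05, 200))  # time update
        like = np.exp(-0.5 * ((particles - z_obs) / 0.3) ** 2)  # closer to observation -> heavier
        if contact:
            like *= np.exp(-0.5 * (particles / 0.1) ** 2)   # favor particles touching the surface
        else:
            like = like * (particles > 0.0)                 # interfering particles get weight 0
        weights = weights * like                            # particle set adjustment
        weights = weights / max(weights.sum(), 1e-300)

print("final estimate:", float(np.sum(weights * particles)))
```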
- a robot control method according to the present disclosure likewise controls a robot that manipulates a manipulated object so that it transitions from a state in which it is separated from the object of interest located in the environment to the completion state in which it is in contact with the object of interest in a specific manner.
- the method sets an intermediate target state partway toward the completion state, or the completion state itself, as the target state of the current movement; acquires sensor observation results regarding the position and orientation of the manipulated object and the presence or absence of contact between the manipulated object and the object of interest; and sets a set of particles representing the uncertainty of the position and orientation of the manipulated object, where each particle represents one possible position and orientation of the object.
- particles representing a position and orientation closer to that indicated by the observation result are given larger weights, and when the observation result indicates that contact has occurred, a particle's weight is increased the closer the object of interest and a manipulated object placed at the position and orientation represented by that particle are to a state of contact in the virtual space in which the shapes and relative positional relationship of the two objects are represented. An estimated state, the position and orientation of the manipulated object estimated based on the weighted set of particles, is then calculated, an action for moving the manipulated object from the estimated state to the target state of the current movement is planned, and the robot is commanded to execute the planned action.
- a computer is caused to execute a process that repeats the setting of the target state, the acquisition of the observation results, the setting and adjustment of the particle set, the calculation of the estimated state, the planning of the action, and the execution of the action until the estimated state matches the completion state within a predetermined error.
- a robot control program according to the present disclosure causes a computer to execute the same processing for controlling a robot that manipulates a manipulated object so that it transitions from a state in which it is separated from the object of interest to the completion state in which it is in contact with the object of interest in a specific manner: setting the target state of the current movement (an intermediate target state or the completion state), acquiring the sensor observation results of the position and orientation of the manipulated object and the presence or absence of contact with the object of interest, setting the particle set and adjusting its weights, calculating the estimated state from the weighted particles, planning and executing the action that moves the manipulated object from the estimated state toward the target state, and repeating these steps until the estimated state matches the completion state within a predetermined error.
- according to the present disclosure, the robot can be controlled so as to manipulate the manipulated object with high precision.
- FIG. 1 is a diagram showing the configuration of a control system for controlling a robot;
- FIG. 2 is a diagram showing a schematic configuration of the robot as an example of an object to be controlled;
- FIG. 3 is a block diagram showing the hardware configurations of the motion planning device and the control device;
- FIG. 4 is a block diagram showing an example of the functional configurations of the motion planning device and the control device;
- FIG. 5 is a diagram showing an example of an assembly procedure using parts A to D;
- FIG. 6 is a diagram showing an example of the relative positions and orientations of grip data;
- FIG. 7 is a block diagram showing an example of the functional configuration of the execution unit of the control device;
- FIG. 8 is a diagram for explaining a method of generating a guide;
- FIG. 9 is a sequence diagram showing the flow of processing of the control system of the present embodiment;
- FIG. 10 is a flowchart showing the flow of control processing of the control device.
- FIG. 1 is a diagram showing the configuration of a control system for controlling a robot according to this embodiment.
- the control system 1 has a robot 10, a state observation sensor 14, a contact observation sensor 16, a motion planning device 20, a state transition control unit 25, and a control device 30.
- the state transition control unit 25 may be part of the motion planning device 20, may be part of the control device 30, or, as in the present embodiment, may be configured as a device independent of both.
- FIG. 2 is a diagram showing a schematic configuration of the robot 10. The robot 10 in this embodiment is a 6-axis vertical articulated robot, and an end effector 12 is provided at the tip 11a of an arm 11 via a flexible portion 13.
- the robot 10 grips parts with the end effector 12 and performs the work of assembling them into an assembly.
- in this embodiment, the end effector 12 has a pair of holding portions 12a, but a suction pad that sucks a component may be used as the end effector 12 instead.
- here, "holding a component" includes sucking the component; in other words, holding a component covers both gripping and suction.
- the robot 10 has an arm 11 with 6 degrees of freedom with joints J1 to J6.
- the joints J1 to J6 connect the links so as to be rotatable in the directions of arrows C1 to C6 by motors (not shown).
- a gripper is connected as the end effector 12 to the tip of the arm 11.
- although a vertical articulated robot is taken as an example here, a horizontal articulated robot (SCARA robot) may be used.
- likewise, although a 6-axis robot has been exemplified, an articulated robot with other degrees of freedom, such as a 5-axis or 7-axis robot, or a parallel link robot may also be used.
- the state observation sensor 14 observes the state of the robot 10 and outputs observed data as state observation data.
- as the state observation sensor 14, for example, a joint encoder of the robot 10, a visual sensor (camera), motion capture, or a force-related sensor is used.
- the position and orientation of the tip 11a of the arm 11 can be identified from the angles of the joints, and the orientation of the part (work target) can be estimated from the visual sensor and/or the force-related sensor.
- when a motion capture marker is attached to the end effector 12, the position and orientation of the end effector 12 can be identified as the state of the robot 10, and the orientation of the part (workpiece) can be estimated from that position and orientation.
- "force-related sensor" is a general term for force sensors and torque sensors, and also covers tactile sensors when such sensors are provided in areas that come into contact with parts.
- a force-related sensor may be provided on the surface of the end effector 12 where it grips the part, or on a joint inside the end effector 12, so as to detect the force that the end effector of the robot 10 receives from the part.
- a force-related sensor is, for example, a sensor that detects a single-element or multi-element, single-axis, three-axis, or six-axis force as the state of the robot 10. Using a force-related sensor makes it possible to accurately grasp how the end effector 12 grips the part, that is, the posture of the part, and to perform appropriate control.
- the visual sensor can also detect the position and orientation of the end effector 12 itself and the part gripped by the end effector 12 as the state of the robot 10 .
- in this way, the various sensors serving as the state observation sensor 14 can detect the states of the end effector 12 and the gripped part, and their detection results can be acquired as state observation data.
- the contact observation sensor 16 is a pressure sensor, a force sensor, or a tactile sensor.
- a tactile sensor is a sensor that detects pressure distribution, or a sensor that can detect forces in orthogonal three-axis directions and moments around three orthogonal axes.
- a pressure sensor or a tactile sensor is provided, for example, on a portion of the finger of the end effector 12 that comes into contact with the object to be grasped.
- a force sensor is provided, for example, on the wrist portion between the arm 11 and the end effector 12 of the robot 10.
- FIG. 3 is a block diagram showing the hardware configuration of the motion planning device 20 and the control device 30 according to this embodiment.
- the motion planning device 20 and the control device 30 can be implemented with similar hardware configurations.
- the motion planning device 20 has a CPU (Central Processing Unit) 20A, a ROM (Read Only Memory) 20B, a RAM (Random Access Memory) 20C, a storage 20D, an input unit 20E, a display unit 20F, and a communication interface (I/F) 20G. The components are communicably connected to each other via a bus 20H.
- the control device 30 has a CPU 30A, a ROM 30B, a RAM 30C, a storage 30D, an input section 30E, a display section 30F, and a communication I/F 30G. Each component is communicably connected to each other via a bus 30H.
- the case of the motion planning device 20 will be described below.
- programs are stored in the ROM 20B or the storage 20D.
- the CPU 20A is a central processing unit that executes various programs and controls each configuration. That is, the CPU 20A reads a program from the ROM 20B or the storage 20D and executes the program using the RAM 20C as a work area. The CPU 20A performs control of each configuration and various arithmetic processing according to programs recorded in the ROM 20B or the storage 20D.
- the ROM 20B stores various programs and various data.
- the RAM 20C temporarily stores programs or data as a work area.
- the storage 20D is configured by a HDD (Hard Disk Drive), SSD (Solid State Drive), or flash memory, and stores various programs including an operating system and various data.
- the input unit 20E includes a keyboard and a pointing device such as a mouse, and is used for various inputs.
- the display unit 20F is, for example, a liquid crystal display, and displays various information.
- the display unit 20F may employ a touch panel system and function as the input unit 20E.
- the communication interface (I/F) 20G is an interface for communicating with other devices, and uses standards such as Ethernet (registered trademark), FDDI, or Wi-Fi (registered trademark), for example.
- FIG. 4 is a block diagram showing an example of the functional configuration of the motion planning device 20 and the control device 30. Note that the motion planning device 20 and the control device 30 may be integrated.
- the motion planning device 20 has a transition creation unit 110 and a grip data specifying unit 112 as functional configurations.
- each functional configuration of the motion planning device 20 is realized by the CPU 20A reading a program stored in the ROM 20B or the storage 20D, loading it into the RAM 20C, and executing it.
- the control device 30 has an acquisition unit 130 and an execution unit 132 as functional configurations.
- each functional configuration of the control device 30 is realized by the CPU 30A reading a program stored in the ROM 30B or the storage 30D, loading it into the RAM 30C, and executing it.
- the assembly process by the work of the robot 10 is represented by a state transition diagram.
- the entire state transition diagram is decomposed into elements, namely individual parts or assemblies, and the data representing a transition of elements is represented as unit state transition data.
- the operation planning device 20 creates each piece of unit state transition data as an operation plan, and accepts registration information necessary for creating unit state transition data by input from the user.
- the registration information includes various types of information such as part information (part ID, type, etc.), CAD information for each type of end effector 12, assembly procedure (including disassembly procedure), and part gripping position (including orientation).
- the assembly procedure includes relative trajectories of parts recorded during assembly (or disassembly) on the simulation.
- the transition creation unit 110 creates state transition data that includes each piece of unit state transition data. Each piece of unit state transition data is created from the assembly procedure in the registration information.
- specifically, the transition creation unit 110 creates unit state transition data representing a transition from a state in which a first element and a second element exist independently to a state in which a third element, an assembly composed of the first element and the second element, has been assembled. The first element is a part or an assembly, and so is the second element.
- FIG. 5 is a diagram showing an example of an assembly procedure using parts A to D.
- each of (1) to (3) corresponds to the initial state included in the unit state transition data.
- in (1), part A and part B correspond to the first element and the second element, respectively, and the assembly of part A and part B shown in (2) corresponds to the third element.
- the third element is the target state included in the unit state transition data whose initial state is (1). The same applies to the unit state transition data for (2) and later.
- the grip data specifying unit 112 will be described using the procedure for assembling parts A to D as an example.
- the grip data specifying unit 112 specifies grip data for each piece of unit state transition data based on the part gripping position in the registration information.
- specifically, it specifies the grip data used when the end effector 12 of the robot 10 grips the first element or the second element, whichever is the object to be grasped, for assembly.
- the gripping data is planned values of the relative position and orientation of the end effector 12 and the gripped object when the end effector 12 grips the gripped object.
- the relative position and orientation in the grip data will be described below.
- the planned values of the relative position and orientation are the planned values of the gripping position and gripping orientation when the end effector 12 grips the element.
- the element other than the one set as the object to be grasped becomes the object into which it is incorporated.
- the position and orientation are represented by six degrees of freedom.
- FIG. 6 is a diagram showing an example of the relative position and orientation of grip data.
- FIG. 6A shows the relative position and orientation R1 of the end effector 12 with respect to part A, represented by an arrow, when the end effector 12 is a suction pad.
- FIG. 6B shows the relative position and orientation R2 of the end effector 12 with respect to part B when the end effector 12 is a gripper (hand) having a pair of holding portions. Parts C and D are represented similarly.
- in each figure, the left side represents the part coordinate system, and the right side represents the relative position and orientation as seen from the end effector 12.
- the grip data is configured as, for example, the identification ID of the end effector 12 used at the time of grasping, CAD data, and the planned values of the target relative position and orientation of the grasped object.
- the planned values of the relative position and orientation may be entered by the user (administrator) together with the part gripping position (the position to be gripped on the part surface) in the registration information, or may be calculated automatically by an existing method such as grasp planning.
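- as a concrete illustration, one piece of grip data could be held in a small record type; the field names below are hypothetical and not taken from the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GripData:
    """Hypothetical container for one piece of grip data."""
    end_effector_id: str       # identification ID of the end effector used for grasping
    cad_file: str              # CAD data of the grasped object
    relative_pose: np.ndarray  # planned 4x4 homogeneous pose of the object seen from the end effector

grip_b = GripData("gripper-2finger", "part_B.step", np.eye(4))
```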
- the motion planning device 20 outputs the created unit state transition data and the grip data, which is control data, to the state transition control unit 25.
- one state transition represented by unit state transition data is also called a task. Instead of the motion planning device 20 outputting data each time the task is updated, all the unit state transition data and control data included in the state transition data may be output to the state transition control unit 25 at once, with the state transition control unit 25 managing which unit state transition data is to be output next. Alternatively, the grip data may be output directly to the control device 30 without going through the state transition control unit 25 and managed on the control device 30 side.
- the state transition control unit 25 outputs unit state transition data corresponding to the task to be processed among the state transition data to the execution unit 132 of the control device 30 .
- the processing target is specified at the start of the assembly work, and is updated each time a notification of task completion is received from the control device 30 .
- the state transition control unit 25 may be included in the motion planning device 20, may be included in the control device 30, or may be a device different from any of them. Further, the entire control system including the operation planning device 20, the control device 30 and the state transition control section 25 may be one device.
- the acquisition unit 130 acquires unit state transition data and grip data to be processed from the state transition control unit 25 .
- the acquisition unit 130 also acquires state observation data obtained by observing the position of the first element and the position of the second element from the state observation sensor 14 . Note that the position of the first element and the position of the second element include posture.
- the acquisition unit 130 also acquires contact observation data, which are sensor values obtained from the contact observation sensor 16 .
- the execution unit 132 uses the unit state transition data output from the state transition control unit 25 to execute tasks.
- a task is, for example, a task to complete the state in which the third element is assembled in the unit state transition data.
- the execution unit 132 causes the end effector 12 to grasp a grasped object, which is one of the first element and the second element, based on the observation data and the grip data, and moves the grasped object relative to the other element along a target relative trajectory whose final part includes a guide described later.
- hereinafter, an object located in the environment is called an object of interest, and an object manipulated by the robot 10 with the end effector 12 is called a manipulated object.
- it is assumed that CAD models of both objects are given, that both objects are rigid bodies, and that there is no error between the CAD model and the actual object shape. It is also assumed that the position and orientation of the object of interest are not changed by any external force, and that the relative position and orientation constituting the completion state of the object of interest and the manipulated object are given as the unit state transition data of the task.
- in an assembly task, the object of interest is the part to which assembly is performed, the manipulated object is the grasped part, and the completion state is the relative position and orientation in which the grasped part is assembled to its destination.
- in a movement task, the object of interest is a landmark in the environment, the manipulated object is the grasped part, and the completion state is the relative position and orientation of the desired destination with respect to the landmark.
- the relative position and orientation of the target object and the manipulated object are recognized by visual observation using the state observation sensor 14.
- the positions and orientations of the target object and the manipulated object in the camera coordinate system are recognized by CAD matching.
- the position and orientation of the CAD origin of the manipulation object viewed from the CAD origin of the object of interest is defined as the relative position and orientation.
- the obtained relative position and orientation also include an error.
- the deviation between the true value of the relative position and orientation and the observed value is called observation noise of the system.
- the observation noise is likewise a conceptual quantity and cannot be obtained directly.
- observed values of the relative position and orientation of the object of interest and the manipulated object are obtained by processing an image captured by a camera. After gripping, they may instead be obtained by calculation based on the joint angles of the robot 10 measured by encoders.
- the contact between the target object and the manipulated object is recognized from the contact observation sensor 16, which is a pressure sensor, a force sensor, or a tactile sensor.
- it is assumed that the contact observation value of the contact observation sensor 16 is correct, that is, that whether or not contact has occurred can be observed accurately.
- when the robot 10 is commanded to move, the actual movement amount of the manipulated object, that is, the amount of change in its relative position and orientation, differs from the commanded movement amount. For example, there may be an error in the movement amount of the TCP (tool center point) due to an error in the mounting position of the robot 10 or an error in the computation of forward kinematics. In actual use of the robot 10, some pre-calibration is performed, so the magnitude of these errors is usually small.
- in addition, if the grasping force is insufficient during the period from observation until the manipulated object is moved to the guide start point, the position and orientation of the manipulated object with respect to the end effector 12 may change.
- likewise, if an external force is applied to the manipulated object, for example by contact with the object of interest while the manipulated object is moving along the guide described later, the position and orientation of the manipulated object with respect to the end effector 12 may change.
- for these reasons, the manipulated object cannot always be moved by exactly the commanded movement amount. The deviation of the actual movement amount from the commanded movement amount is called the system noise of the system.
- let the coordinate system of the state s be the object coordinate system, a relative coordinate system based on the object of interest. Let x_t be the position and orientation of the TCP in the robot coordinate system at time t, and let T_t be the homogeneous transformation matrix relating the positions and orientations x_t and s_t.
- expression (1) describes this transformation of the position and orientation: s_t = T_t x_t (1). Here s_t, T_t, and x_t are all expressed in homogeneous coordinates.
- the homogeneous transformation matrix T_t not only transforms the position and orientation of the TCP from the robot coordinate system to the object coordinate system, but also includes the conversion from the relative position and orientation of the TCP to the relative position and orientation of the manipulated object. That is, while x_t is the position and orientation of the TCP, the state s_t obtained by transforming x_t by T_t is the relative position and orientation of the manipulated object, not of the TCP.
- the state s_t and the TCP position and orientation x_t can each be treated as a 4x4 matrix; the fourth row and fourth column are the extra components of the homogeneous representation. Since a rotation matrix inherently carries only three-dimensional information, these matrices can effectively be treated as six-dimensional.
- the homogeneous transformation matrix T_t is also a 4x4 matrix, and the product of two matrices in homogeneous-coordinate notation corresponds to the addition (composition) of positions and orientations in vector notation.
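- to illustrate with standard homogeneous coordinates (nothing here is specific to the patent), the following builds 4x4 poses and composes them by matrix multiplication, as in expression (1):

```python
import numpy as np

def pose(rot_z_deg, txyz):
    """Build a 4x4 homogeneous pose from a rotation about z and a translation."""
    c, s = np.cos(np.radians(rot_z_deg)), np.sin(np.radians(rot_z_deg))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = txyz
    return T

x_t = pose(30, [0.4, 0.0, 0.2])    # TCP position and orientation in the robot frame
T_t = pose(-90, [0.1, 0.0, 0.0])   # robot frame -> object frame (including the TCP-to-object offset)
s_t = T_t @ x_t                    # relative pose of the manipulated object, s_t = T_t x_t
print(s_t)
```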
- let ν_t be the system noise of the system, and let ν_t follow some probability distribution P_ν; ν_t is also expressed in homogeneous coordinates.
- due to the system noise, the homogeneous transformation matrix T_t relating s_t and x_t changes by ν_t; the modified homogeneous transformation matrix can be written as ν_t T_t.
- the state equation of the system is expressed in equation (2) in terms of matrix operations: s_{t+1} = ν_t a_t s_t (2), where a_t is the movement amount in the object coordinate system and u_t is the corresponding movement amount of the TCP in the robot coordinate system; ν_t, u_t, and a_t are all expressed in homogeneous coordinates.
- combining the state equation with the coordinate transformation yields equation (4) as the relationship between ν_t a_t and u_t: ν_t a_t = T_t u_t T_t^{-1} (4).
- next, observation by the state observation sensor 14 and the contact observation sensor 16 will be considered.
- an observation act yields an observed value yt .
- Observations y t are composed of state observations s t ' and contact observations c t '.
- s_t' is the observed value of the relative position and orientation in the object coordinate system; it is assumed that s_t' deviates from the true value s_t due to observation noise ω_t.
- let ω_t be the observation noise of the system, and let ω_t follow some probability distribution P_ω; ω_t is also expressed in homogeneous coordinates.
- formula (8) describes this state space model as a probability model.
- the state is estimated and controlled based on the state space model of formula (8) and the probability, given by formula (9), of the state when an observation is obtained. That is, the robot 10 manipulates the relative position and orientation between the two objects.
- in executing a task, the execution unit 132 controls the robot 10 that manipulates the manipulated object so that it transitions from the initial state, in which the manipulated object is separated from the object of interest located in the environment, to the completion state, in which the manipulated object is in contact with the object of interest in a specific manner.
- the execution unit 132 controls the robot 10 for each task.
- the task is a task of assembling a gripped part to another part, a task of moving the gripped part to another position in the environment, or a task of gripping a part, which will be described later.
- as shown in FIG. 7, the execution unit 132 includes a guide setting unit 140, an initial setting unit 142, an action planning unit 144, a command conversion unit 146, an observation unit 148, a time update unit 150, an observation update unit 152, a state estimation unit 154, an iteration determination unit 156, and a target setting unit 158.
- the target setting unit 158 is an example of the target state setting unit, the time update unit 150 is an example of the particle set setting unit, the observation update unit 152 is an example of the particle set adjustment unit, the command conversion unit 146 is an example of the action unit, and the iteration determination unit 156 is an example of the processing control unit.
- the guide setting unit 140 sets a guide.
- the guide is the final part of the relative trajectory of the manipulated object with respect to the object of interest, and is a common target relative trajectory regardless of the initial relative position of the manipulated object with respect to the object of interest.
- the guide includes a completed state and a series of intermediate goal states leading up to the completed state.
- the completed state and the intermediate goal state are represented by the object coordinate system.
- the guide setting unit 140 creates a guide for the unit state transition data based on the assembly procedure included in the registration information.
- concretely, a guide is a sequence of discrete relative positions and orientations leading to the completion state.
- the completed state is the end of the guide.
- the guide setting unit 140 may be provided in the motion planning device 20. In that case, the guide is also acquired by the control device 30 in the same manner as the unit state transition data.
- FIG. 8 shows an example of the trajectory of the position and orientation of part D obtained by starting from the state in which part D is assembled into the assembly of parts A to C and then gradually moving part D in a direction in which it can move without interfering with that assembly. A trajectory obtained by starting from the end point of this trajectory and reversing the movement direction is created as the guide G1.
- the spacing of the states that make up the guide need not be uniform, for example, the states may be densely set in areas where contact is expected.
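- a minimal sketch of this reversal, assuming the withdrawal trajectory is available as a list of 4x4 homogeneous poses recorded on the simulator (the 1 cm stage spacing is an invented example):

```python
import numpy as np

def make_guide(withdrawal_traj):
    """Reverse a recorded withdrawal trajectory into a guide.

    withdrawal_traj[0] is the assembled (completion) pose and the last entry
    is the fully withdrawn pose, so the reversed list approaches completion.
    """
    return list(reversed(withdrawal_traj))

withdrawal = []
for k in range(5):          # M + 1 = 5 poses
    T = np.eye(4)
    T[2, 3] = 0.01 * k      # withdraw part D 1 cm per stage along z
    withdrawal.append(T)

guide = make_guide(withdrawal)   # guide[-1] is the completion state s_M*
```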
- the initial setting unit 142 sets the intermediate target state on the guide that is farthest from the completion state as the target state of the first movement.
- the initial setting unit 142 also initializes the set of particles representing the uncertainty of the position and orientation of the manipulated object, where each particle included in the set represents one possible position and orientation of the manipulated object.
- the positions and orientations of the manipulated object represented by the particles are expressed in the object coordinate system.
- the action planning unit 144 plans actions for moving the operation object from the initial state or an estimated state described later to the target state of the current movement.
- the estimated state and the action are expressed in the object coordinate system.
- the command conversion unit 146 converts the planned action into a command that can be executed by the robot 10 and outputs the command to the robot 10 .
- the observation unit 148 acquires observation results regarding the relative position and orientation of the target object and the manipulation object from the state observation data acquired by the acquisition unit 130 .
- the observation unit 148 acquires the relative position and orientation of the target object and the manipulation object as observation results represented by the object coordinate system.
- the observation unit 148 acquires observation results regarding the presence or absence of contact between the operation object and the object of interest from the contact observation data acquired by the acquisition unit 130 .
- the contact observation data is converted into a binary value corresponding to contact or non-contact and recognized as a contact observation value.
- the method for converting the sensor value into binary values is not limited, and an existing method may be used.
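- for example, one simple existing method is a fixed threshold on the sensed force magnitude; the threshold value below is an arbitrary placeholder:

```python
import numpy as np

def to_contact_observation(force_xyz, threshold=0.5):
    """Binarize a force reading (N) into contact (True) / non-contact (False)."""
    return bool(np.linalg.norm(force_xyz) > threshold)

print(to_contact_observation([0.0, 0.1, 0.8]))  # True
```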
- the time update unit 150 updates the set of particles to the next time step.
- the observation update unit 152 updates the weight of each particle included in the set of particles. Particles representing a position and orientation closer to that indicated by the observation result are given larger weights, and when the observation result indicates that contact has occurred, a particle is given a larger weight the closer the object of interest and a manipulated object placed at the position and orientation represented by that particle are to a state of contact in the virtual space in which the shapes and relative positional relationship of the two objects are represented.
- the virtual space is the space described by the object coordinate system.
- the state estimation unit 154 calculates an estimated state, which is the position and orientation of the manipulated object estimated based on the set of particles whose weights have been adjusted.
- the iteration determination unit 156 determines whether the calculated estimated state matches the completion state within a predetermined error. The iteration determination unit 156 repeats the processes of the target setting unit 158, the action planning unit 144, the command conversion unit 146, the observation unit 148, the time update unit 150, the observation update unit 152, and the state estimation unit 154 until the estimated state matches the completion state within the predetermined error.
- the target setting unit 158 sequentially sets the intermediate target states on the guide, from the one farthest from the completion state to the nearest, as the target state of the current movement, and finally sets the completion state as the target state of the current movement.
- FIG. 9 is a sequence diagram showing the processing flow of the control system 1 of this embodiment.
- the CPU 20A functions as each unit of the operation planning device 20 to perform the operation planning process
- the CPU 30A functions as each unit of the control device 30 to perform control processing.
- in step S100, the motion planning device 20 creates state transition data including each piece of unit state transition data, together with control data.
- the control data is the grip data corresponding to each piece of unit state transition data created.
- in step S102, the motion planning device 20 outputs the state transition data and the control data to the state transition control unit 25.
- in step S104, the state transition control unit 25 receives an instruction to start a series of tasks and starts the task to be processed.
- when the task start instruction is received, the first task to be processed is started, and the processing target is then updated sequentially according to the progress of the tasks.
- note that the task start instruction may instead be received by the control device 30.
- in step S108, the state transition control unit 25 outputs the unit state transition data corresponding to the task to be processed, and the control data corresponding to that unit state transition data, to the control device 30. The state transition control unit 25 first outputs the unit state transition data corresponding to the first task in the order of the repeated processing, and then outputs the unit state transition data corresponding to the next task as the tasks progress.
- in step S114, the acquisition unit 130 of the control device 30 acquires the state observation data and contact observation data observed by the state observation sensor 14 and the contact observation sensor 16, and the execution unit 132 executes the task to be processed.
- in the task, the robot 10 is controlled so that the manipulated object transitions from the initial state, in which it is separated from the object of interest located in the environment, to the completion state, in which it is in contact with the object of interest in a specific manner.
- in step S118, the execution unit 132 of the control device 30 notifies the state transition control unit 25 of task completion.
- in step S120, the state transition control unit 25 determines whether or not processing has been completed up to the final state of the state transition data. If it has, the process proceeds to step S124; if it has not, the process proceeds to step S122.
- in step S122, the state transition control unit 25 updates the task to be processed, that is, advances the processing target to the next task, and returns to step S108.
- in step S108 after the task update, the unit state transition data and control data (grip data) corresponding to the new task to be processed are output to the control device 30, and the subsequent processes are repeated.
- in step S124, the state transition control unit 25 ends the task, and the processing of the control system 1 ends.
- FIG. 10 is a flowchart showing the flow of control processing of the control device 30 in step S114.
- in step S130, the guide setting unit 140 generates and sets a guide for the unit state transition data based on the assembly procedure in the registration information. Specifically, a start state s_0* and an end state s_M* of the relative position and orientation of the object of interest and the manipulated object are defined, and a set of M+1 states is defined as the guide.
- the state set s_0:M* is the guide, and it is generated using the CAD models of the object of interest and the manipulated object.
- starting from the completion state, a trajectory of the position and orientation of the manipulated object is created by gradually moving the object over M stages in a direction in which it can move without interfering with the object of interest; the guide is then obtained by starting from the end point of this trajectory and reversing the movement direction.
- the spacing of the states that make up the guide need not be uniform, for example, the states may be densely set in areas where contact is expected.
- in step S132, the initial setting unit 142 sets the initial value of the target state to the guide start state s_0*.
- in step S134, the observation unit 148 acquires the relative position and orientation of the manipulated object from the state observation data obtained by the state observation sensor 14.
- next, the initial setting unit 142 determines the initial values of the particle set {s_0^(n)} (n = 1, ..., N). Prior knowledge may be used to determine the initial value s_0^(n) of each particle; for example, the initial values may be determined so as to be uniformly distributed within the maximum error range expected for the observed value of the relative position and orientation of the manipulated object.
- the particle set represents the uncertainty of the relative position and orientation of the manipulated object.
- each particle represents one possible relative position and orientation of the manipulated object.
- the state estimate ŝ_t is the representative relative position and orientation of the particle set.
- the particle scattering variable ε corresponds to hypothetical system noise.
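- as one possible initialization consistent with this description, the sketch below scatters particles uniformly within assumed maximum error bounds of ±1 cm and ±5 degrees (invented values) around the observed relative pose:

```python
import numpy as np

rng = np.random.default_rng(0)

def rot_xyz(rx, ry, rz):
    """Rotation matrix from XYZ Euler angles (radians)."""
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def init_particles(s_obs, n=200, t_max=0.01, r_max=np.radians(5)):
    """Spread particles uniformly within the assumed maximum observation error."""
    particles = []
    for _ in range(n):
        d = np.eye(4)
        d[:3, :3] = rot_xyz(*rng.uniform(-r_max, r_max, 3))
        d[:3, 3] = rng.uniform(-t_max, t_max, 3)
        particles.append(d @ s_obs)   # perturb the observed relative pose
    return particles

particles = init_particles(np.eye(4))
```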
- in step S138, denoting the current target among the elements s_m* of the state set forming the guide as s_t* and the next target as s_{t+1}*, the action planning unit 144 determines the movement amount a_t in the object coordinate system from s_{t+1}* and the estimated value ŝ_t of the current state.
- the estimated value ŝ_t of the state s_t is obtained in step S148, described later.
- the initial value of ŝ_t is the relative position and orientation obtained in step S134.
- the movement amount may instead be determined by performing path planning using CAD each time.
- T̂_t corresponds to an estimated value of T_t.
- the manipulated object is moved by the determined movement amount a_t.
- specifically, the estimate T̂_t is used to convert the movement amount a_t into the TCP control amount û_t, and the robot 10 is controlled by û_t.
- strictly, the control amount u_t would be obtained by converting the net relative movement ν_t a_t of the position and orientation into a TCP movement amount in the robot coordinate system, but u_t cannot be obtained because ν_t is unknown; therefore, control is performed using û_t.
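- one consistent reading of this conversion, given the relation s_t = T_t x_t introduced above, is that an object-frame movement a_t corresponds to the TCP movement û_t = T̂_t⁻¹ a_t T̂_t in the robot frame; the conjugation form below is an inference from those definitions, not quoted from the patent:

```python
import numpy as np

def to_tcp_command(a_t, T_hat):
    """Convert an object-frame movement a_t into an estimated TCP movement u_hat.

    If the relative pose should change as s -> a_t s and s = T x, the TCP
    must move as x -> (T^-1 a_t T) x, hence the conjugation by T_hat.
    """
    return np.linalg.inv(T_hat) @ a_t @ T_hat

a_t = np.eye(4)
a_t[2, 3] = -0.005                     # e.g. a 5 mm approach along the object z-axis
u_hat = to_tcp_command(a_t, np.eye(4))
```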
- in step S140, the command conversion unit 146 converts the TCP movement amount û_t in the robot coordinate system into command amounts for the rotation of each joint of the robot 10 by inverse kinematics, and issues an action command to the robot 10.
- in step S142, forward kinematics is used to obtain x_t, the position and orientation of the TCP in the robot coordinate system.
- in step S144, the time update unit 150 updates the particle set after the action of step S140: each particle s_t^(n) is updated by the movement amount a_t and the particle scattering variable ε_t^(n). The particle weights are reset to the uniform value 1/N.
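- a sketch of this time update, drawing the scattering variable as small random pose perturbations (the noise distribution and magnitudes are assumptions; rotation noise is applied about z only for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

def time_update(particles, a_t, sigma_t=0.001, sigma_r=np.radians(0.5)):
    """Move every particle by a_t, scatter it, and reset all weights to 1/N."""
    new_particles = []
    for s in particles:
        eps = np.eye(4)                               # sampled scattering perturbation
        th = rng.normal(0.0, sigma_r)
        eps[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
        eps[:3, 3] = rng.normal(0.0, sigma_t, 3)
        new_particles.append(eps @ a_t @ s)           # perturbed move of particle s
    weights = np.full(len(new_particles), 1.0 / len(new_particles))
    return new_particles, weights
```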
- in step S146, the observation update unit 152 updates the weight of each particle included in the set of particles.
- interference is determined as follows. The object of interest, drawn using its CAD shape data, is placed in a simulation space (virtual space) for interference determination. For each particle included in the particle set, a manipulated object having the position and orientation represented by that particle is placed in the simulation space using the CAD shape data. If the manipulated object placed for a given particle overlaps the object of interest, it is determined that there is interference; if the manipulated object and the object of interest are separated and do not overlap, it is determined that there is no interference. The presence or absence of interference is thus determined for each particle. In addition, the interference distance, the closest distance between the surface of the object of interest and the surface of the manipulated object in the simulation, is calculated.
- in this way, the particle set is used to perform CAD-based interference determination on the simulation, yielding a set of interference determinations and a set of interference distances {d_t^(n)}.
- the interference distance d_t^(n) is the shortest distance between the surface of the object of interest and the surface of the manipulated object for the n-th particle in the simulation; it is defined for particles without interference, for which d_t^(n) > 0.
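- the patent performs this check with CAD models in a collision simulator; as a self-contained stand-in, the sketch below models the object of interest as the half-space z ≤ 0 and treats each particle's object origin as the query point (a deliberately crude proxy for a real CAD interference check):

```python
import numpy as np

def check_interference(particles):
    """Return (interference flags, signed clearances) for each particle.

    A particle interferes when it places the object origin at z < 0; the
    signed clearance is its height z (negative values = penetration depth).
    """
    z = np.array([s[2, 3] for s in particles])
    return z < 0.0, z
```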
- the likelihood L(y_t | s_t^(n)) of each particle is calculated by the likelihood function of equation (16).
- the delta function included in the expression for the case where no actual contact is observed returns 1 for particles without simulated interference and 0 for particles with interference.
- the exponential part included in equation (16) is a Gaussian function.
- a Gaussian function need not be used; any function whose value is high when the distance is short and low when the distance is long, such as an exponential decay, may be used instead.
- the particle weights W are updated by equation (17) using the likelihood values obtained from the likelihood function.
- when no contact is observed, the likelihood function sets the weight of particles with interference in the simulation (particles contradicting the observation of no contact) to 0, and for the remaining particles increases the weight the closer the particle's state s_t^(n) is to the observed state s_t' of the manipulated object.
- when contact is observed, the likelihood function likewise weights a particle more heavily the closer its state s_t^(n) is to the observed state s_t', and additionally has the effect of increasing the weight of a particle the smaller the shortest distance between the surface of the object of interest and the surface of the manipulated object.
- as a result, the weights of particles representing states close to a state in which the surfaces of the object of interest and the manipulated object are in contact are updated to be larger, while the weights of particles representing states in which the manipulated object is far from the object of interest, or penetrates deeply into it, are updated to be smaller.
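- putting the two observation cases together, a likelihood and weight update consistent with the description of equations (16) and (17) might look as follows, reusing the flags and clearances from the previous sketch (the pose-distance metric and Gaussian widths are invented tuning choices):

```python
import numpy as np

def pose_distance(s_a, s_b):
    """Crude pose distance: translation gap plus a small rotation penalty."""
    dt = np.linalg.norm(s_a[:3, 3] - s_b[:3, 3])
    dr = np.arccos(np.clip((np.trace(s_a[:3, :3].T @ s_b[:3, :3]) - 1) / 2, -1, 1))
    return dt + 0.01 * dr

def observation_update(particles, weights, s_obs, contact, flags, clearances,
                       sig_pose=0.005, sig_d=0.002):
    """Reweight particles against the pose observation and the contact observation."""
    like = np.array([np.exp(-0.5 * (pose_distance(s, s_obs) / sig_pose) ** 2)
                     for s in particles])
    if contact:
        # surfaces should touch: weight peaks where the clearance is near zero,
        # and falls off for separated or deeply penetrating particles
        like = like * np.exp(-0.5 * (np.abs(clearances) / sig_d) ** 2)
    else:
        like = like * ~flags              # interfering particles contradict "no contact"
    w = weights * like
    return w / max(w.sum(), 1e-300)       # normalize, in the spirit of equation (17)
```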
- in step S148, the state estimation unit 154 calculates the estimated state, the position and orientation of the manipulated object estimated based on the set of particles whose weights have been adjusted.
- in this way, the estimated value ŝ_t of the state of the manipulated object is corrected so as to approach the observed position and orientation of the manipulated object, and, when actual contact is observed by the contact observation sensor 16, so that the surfaces of the manipulated object and the object of interest are in contact in the simulation.
- the estimated value ŝ_t of the state at time t is obtained from equation (18) as the weighted expected value over the particle set: ŝ_t = exp(Σ_n w_t^(n) log s_t^(n)) (18), where log denotes the matrix logarithm and exp the matrix exponential.
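- equation (18) can be transcribed directly with matrix logarithms and exponentials; the sketch below uses scipy's general logm/expm for brevity (a dedicated SE(3) log/exp map would be the usual choice in practice):

```python
import numpy as np
from scipy.linalg import expm, logm

def weighted_pose_mean(particles, weights):
    """Estimate s_hat = exp(sum_n w_n log s_n), as in equation (18)."""
    acc = np.zeros((4, 4))
    for s, w in zip(particles, weights):
        acc += w * np.real(logm(s))   # logm can return tiny imaginary parts
    return np.real(expm(acc))

s_hat = weighted_pose_mean([np.eye(4), np.eye(4)], np.array([0.5, 0.5]))
```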
- in step S150, the iteration determination unit 156 determines whether or not the state estimate ŝ_t matches the guide end state s_M* within a predetermined error range. If it does, the control process ends; if it does not, the process proceeds to step S152.
- in step S152, the target setting unit 158 sets the target state to the next state on the guide. The time is also advanced by one step, with the previous t+1 becoming the new t, and the process returns to step S138.
- step S142 may be performed after step S144.
- for the portion of the motion in which the manipulated object is moved from the position where it was gripped to the guide start point, the robot 10 may be controlled as follows.
- the movement path from the place where the manipulated object is grasped to the guide start point is created by connecting straight lines or by using a conventional motion planning method. The movement itself is performed, for example, as follows.
- first, the position and orientation of the object of interest in the camera coordinate system are observed from the state observation data obtained by the camera of the state observation sensor 14, and a virtual space having the object coordinate system, a coordinate system based on the object of interest, is constructed. This allows the camera coordinate system and the object coordinate system to be mutually transformed.
- the guide represents the trajectory of the manipulated object rather than the end effector 12 .
- the position and orientation of the manipulated object before being grasped in the camera coordinate system are observed, converted into the position and orientation of the object coordinate system, and the manipulated object is placed in the virtual space. Then, the position and orientation of the end effector 12 in the robot coordinate system are acquired from the state observation data obtained by the encoder.
- the position and orientation of the end effector 12 in the camera coordinate system are observed from the state observation data obtained by the camera, converted into the position and orientation of the object coordinate system, and the end effector 12 is placed in the virtual space. This enables conversion between the robot coordinate system and the object coordinate system. If the correspondence between the camera coordinate system and the robot coordinate system is calibrated in advance, observation of the end effector 12 by the camera can be omitted.
- the end effector 12 grips the object to be manipulated.
- a movement path is planned from the initial position and orientation of the manipulated object at the time of gripping to the position and orientation at the guide start point.
- the path plan in the virtual space is converted into a path plan in the robot coordinate system space, and the manipulated object is moved according to the plan. Note that errors are not corrected during this movement.
- the position and orientation of the manipulated object and the end effector 12 after the movement are acquired from the state observation data obtained by the camera, and the manipulated object is moved so as to match the position and orientation at the guide start point.
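The straight-line path creation mentioned before these steps might look like the following minimal sketch; representing poses as 6-vectors is an assumption, and component-wise interpolation of rotation parameters is only a rough approximation for small rotation differences:

```python
import numpy as np

def straight_line_path(start_pose, goal_pose, n_waypoints=20):
    """Linear interpolation from the gripped pose to the guide start pose.
    Poses are 6-vectors (position + rotation parameters)."""
    start, goal = np.asarray(start_pose), np.asarray(goal_pose)
    return [(1 - a) * start + a * goal
            for a in np.linspace(0.0, 1.0, n_waypoints)]
```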
- the likelihood of each particle is obtained from the observation information and the likelihood function, and the relative position and orientation are estimated.
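One measurement update of this kind, as a minimal sketch — the Gaussian likelihood, the 6-vector pose representation, and all names are assumptions; the disclosure only states that a likelihood function is used:

```python
import numpy as np

def update_weights(particles, weights, observed_pose, sigma=0.01):
    """Weight each particle by how close its relative pose is to the
    observed relative pose, then normalize.

    particles     : (N, 6) array of candidate relative poses
    observed_pose : observed relative pose as a 6-vector
    """
    err = np.linalg.norm(particles - observed_pose, axis=1)
    weights = weights * np.exp(-0.5 * (err / sigma) ** 2)  # Gaussian likelihood
    return weights / weights.sum()                         # normalize to sum to 1
```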
- the movement amount of the object is determined from the estimated relative position and orientation and the target relative position and orientation at the next time step.
- a homogeneous transformation matrix for transforming the estimated relative position and orientation into the TCP position and orientation in the robot coordinate system is obtained; the movement amount of the object is converted into a TCP movement amount using this matrix; and the robot 10 is controlled so as to bring the relative position and orientation of the two objects to the target relative position and orientation.
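A minimal sketch of this conversion, assuming a fixed grasp transform between the TCP and the manipulated object (all matrix names are hypothetical 4×4 homogeneous transforms):

```python
import numpy as np

def object_motion_to_tcp(T_rob_tcp, T_tcp_obj, T_obj_delta):
    """Convert a desired displacement of the manipulated object into a TCP
    target pose in the robot coordinate system.

    T_rob_tcp   : current TCP pose in the robot frame
    T_tcp_obj   : grasp transform from the TCP to the manipulated object
    T_obj_delta : desired displacement of the object, in the object frame
    """
    T_rob_obj = T_rob_tcp @ T_tcp_obj        # object pose in the robot frame
    T_rob_obj_new = T_rob_obj @ T_obj_delta  # apply the desired displacement
    return T_rob_obj_new @ np.linalg.inv(T_tcp_obj)  # TCP pose realizing it
```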
- the relative position and orientation of the two objects are estimated using the result of contact detection between the objects and a collision-judgment simulation based on CAD models, and the movement amount of the manipulated object is determined from the estimation result.
- recognition of the relative position and orientation by the state observation sensor 14 includes error. Even in such an environment, where observation is uncertain, the relative position and orientation of the two objects can be brought to the target state.
- the motion planning process and the control process, which are executed by the CPU reading software (a program) in the above embodiment, may be executed by various processors other than a CPU.
- examples of such processors include a PLD (Programmable Logic Device) whose circuit configuration can be changed after manufacture, such as an FPGA (Field-Programmable Gate Array), and a dedicated electric circuit, which is a processor having a circuit configuration specially designed to execute specific processing, such as an ASIC (Application Specific Integrated Circuit).
- the motion planning process and the control process may be executed by one of these various processors, or by a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA).
- the hardware structure of these various processors is, more specifically, an electric circuit in which circuit elements such as semiconductor elements are combined.
- in the above embodiment, the program is stored in advance (installed) in the ROM 20B (30B) or the storage 20D (30D), but the present invention is not limited to this.
- the program may be provided in a form recorded on a recording medium such as a CD-ROM (Compact Disc Read-Only Memory), a DVD-ROM (Digital Versatile Disc Read-Only Memory), or a USB (Universal Serial Bus) memory.
- alternatively, the program may be downloaded from an external device via a network.
- in the above embodiment, the end effector 12 is a gripper having a gripping portion 12a or a suction pad, but the present invention is not limited to this; any other configuration capable of holding an object may be used. For example, the end effector 12 may be a vacuum chuck, a magnetic chuck, a spatula for scooping an object, or the like.
- as the unit state transition data, not only the completed state but also a series of discrete relative positions and orientations leading to the completed state may be given.
- the guide may also be a series of discrete relative positions and orientations leading to a state that is not the completed state.
- the task may also be a task of grasping a part that is the object of interest.
- in the above embodiment, an object positioned in the environment is set as the object of interest, and an object gripped by the end effector 12 is set as the manipulated object.
- in the grasping task, by contrast, the object placed in the environment is the object of interest, and the end effector 12 itself is the manipulated object. Since the end effector 12 is moved by the robot arm, the end effector 12 can also be regarded as an object manipulated by the robot.
- in this case, the completed state is the state in which the end effector 12, which is the manipulated object, has gripped the object of interest placed in the environment but has not yet moved it.
- Appendix 1: a robot control device for controlling a robot that moves a manipulated object, which is the object to be manipulated, so that it transitions from a state in which the manipulated object is separated from an object of interest located in the environment to a completed state in which the manipulated object is in contact with the object of interest in a specific manner,
- the robot control device including a memory and at least one processor connected to the memory, the processor: setting, as a target state of the current movement, an intermediate target state that is a target state partway through the movement of the manipulated object toward the completed state, or the completed state; acquiring sensor observation results regarding the position and orientation of the manipulated object and the presence or absence of contact between the manipulated object and the object of interest;
- and setting a set of particles representing the uncertainty in the position and orientation of the manipulated object, each particle included in the set of particles representing one possible position and orientation of the manipulated object.
- a non-transitory storage medium storing a program executable by a computer to execute a robot control process for controlling a robot that manipulates a manipulated object,
- the robot control process including: setting, as a target state of the current movement, an intermediate target state that is a target state partway through the movement of the manipulated object toward the completed state, or the completed state; acquiring sensor observation results regarding the position and orientation of the manipulated object and the presence or absence of contact between the manipulated object and the object of interest;
- and setting a set of particles representing the uncertainty in the position and orientation of the manipulated object, each particle included in the set of particles representing one possible position and orientation of the manipulated object.
Abstract
Description
FIGS. 2 and 3 are diagrams showing the schematic configuration of the robot 10. The robot 10 in the present embodiment is a six-axis vertical articulated robot, and an end effector 12 is provided at the tip 11a of an arm 11 via a flexible part 13. The robot 10 grips a part with the end effector 12 and performs assembly work on an assembly. In the example of FIG. 3, the end effector 12 has a hand with a pair of gripping portions 12a, but the end effector 12 may instead be a suction pad that picks up the part by suction. In the following description, gripping a part is used in a sense that includes picking it up by suction; holding a part may also be used to cover both gripping and suction.
The state observation sensor 14 observes the state of the robot 10 and outputs the observed data as state observation data. As the state observation sensor 14, for example, encoders at the joints of the robot 10, a visual sensor (camera), motion capture, and force-related sensors are used. The position and orientation of the tip 11a of the arm 11 can be identified as the state of the robot 10 from the angles of the joints, and the orientation of the part (work object) can be estimated from the visual sensor and/or the force-related sensors. When motion-capture markers are attached to the end effector 12, the position and orientation of the end effector 12 can be identified as the state of the robot 10, and the orientation of the part (work object) can be estimated from the position and orientation of the end effector 12.
The contact observation sensor 16 is a pressure sensor, a force sensor, or a tactile sensor. The tactile sensor is a sensor that detects a pressure distribution, or a sensor that can detect forces in three orthogonal axial directions and moments about the three orthogonal axes. The pressure sensor or the tactile sensor is provided, for example, on the portions of the fingers of the end effector 12 that contact the gripped object. The force sensor is provided, for example, at the wrist portion between the arm 11 of the robot 10 and the end effector 12.
Next, the configurations of the motion planning device 20 and the control device 30 will be described.
… is determined as the guide.
… the initial values are determined. The initial values s_0^(n) of the particles may be determined based on some prior knowledge. For example, the initial values s_0^(n) may be set so that the particles are distributed uniformly over a region within the assumed maximum error of the observed relative position and orientation of the manipulated object.
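A minimal sketch of this uniform initialization, assuming the relative pose is parameterized as a 6-vector (an assumption; the disclosure does not fix the parameterization, and all names are illustrative):

```python
import numpy as np

def init_particles(observed_pose, max_error, n=500, seed=0):
    """Initialize particles uniformly over the region within the assumed
    maximum error of the observed relative pose.

    observed_pose : observed relative pose as a 6-vector
    max_error     : per-component maximum observation error, same shape
    """
    rng = np.random.default_rng(seed)
    low = np.asarray(observed_pose) - np.asarray(max_error)
    high = np.asarray(observed_pose) + np.asarray(max_error)
    particles = rng.uniform(low, high, size=(n, len(observed_pose)))
    weights = np.full(n, 1.0 / n)   # start from uniform weights
    return particles, weights
```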
With regard to the above embodiments, the following supplementary notes are further disclosed.
A robot control device for controlling a robot that manipulates a manipulated object, which is the object to be manipulated, so that the manipulated object transitions from a state in which it is separated from an object of interest located in the environment to a completed state in which the manipulated object is in contact with the object of interest in a specific manner, the robot control device including:
a memory; and
at least one processor connected to the memory,
wherein the processor:
sets, as a target state of the current movement, an intermediate target state that is a target state partway through the movement of the manipulated object toward the completed state, or the completed state;
acquires observation results from a sensor regarding the position and orientation of the manipulated object and the presence or absence of contact between the manipulated object and the object of interest;
sets a set of particles representing the uncertainty in the position and orientation of the manipulated object, each particle included in the set of particles representing one possible position and orientation of the manipulated object;
increases the weight of a particle the closer the position and orientation represented by that particle is to the position and orientation of the manipulated object indicated by the observation results, and, when the observation results indicate that the contact has occurred, increases the weight of the corresponding particle the closer the object of interest and the manipulated object placed at the position and orientation represented by that particle are to a state of contact in a virtual space in which the shapes and relative positional relationship of the object of interest and the manipulated object are represented;
calculates an estimated state, which is the position and orientation of the manipulated object estimated on the basis of the set of particles whose weights have been adjusted;
plans an action for moving the manipulated object from the estimated state to the target state of the current movement;
commands the robot to execute the planned action; and
repeats the setting of the target state, the acquisition of the observation results, the setting of the set of particles, the adjustment of the set of particles, the calculation of the estimated state, the planning of the action, and the execution of the action until the estimated state matches the completed state within a predetermined error.
A non-transitory storage medium storing a program executable by a computer to execute a robot control process for controlling a robot that manipulates a manipulated object, which is the object to be manipulated, so that the manipulated object transitions from a state in which it is separated from an object of interest located in the environment to a completed state in which the manipulated object is in contact with the object of interest in a specific manner,
wherein the robot control process includes:
setting, as a target state of the current movement, an intermediate target state that is a target state partway through the movement of the manipulated object toward the completed state, or the completed state;
acquiring observation results from a sensor regarding the position and orientation of the manipulated object and the presence or absence of contact between the manipulated object and the object of interest;
setting a set of particles representing the uncertainty in the position and orientation of the manipulated object, each particle included in the set of particles representing one possible position and orientation of the manipulated object;
increasing the weight of a particle the closer the position and orientation represented by that particle is to the position and orientation of the manipulated object indicated by the observation results, and, when the observation results indicate that the contact has occurred, increasing the weight of the corresponding particle the closer the object of interest and the manipulated object placed at the position and orientation represented by that particle are to a state of contact in a virtual space in which the shapes and relative positional relationship of the object of interest and the manipulated object are represented;
calculating an estimated state, which is the position and orientation of the manipulated object estimated on the basis of the set of particles whose weights have been adjusted;
planning an action for moving the manipulated object from the estimated state to the target state of the current movement;
commanding the robot to execute the planned action; and
repeating the setting of the target state, the acquisition of the observation results, the setting of the set of particles, the adjustment of the set of particles, the calculation of the estimated state, the planning of the action, and the execution of the action until the estimated state matches the completed state within a predetermined error.
Claims (7)
1. A robot control device for controlling a robot that manipulates a manipulated object, which is the object to be manipulated, so that the manipulated object transitions from a state in which it is separated from an object of interest located in the environment to a completed state in which the manipulated object is in contact with the object of interest in a specific manner, the robot control device comprising:
a target state setting unit that sets, as a target state of the current movement, an intermediate target state that is a target state partway through the movement of the manipulated object toward the completed state, or the completed state;
an observation unit that acquires observation results from a sensor regarding the position and orientation of the manipulated object and the presence or absence of contact between the manipulated object and the object of interest;
a particle set setting unit that sets a set of particles representing the uncertainty in the position and orientation of the manipulated object, each particle included in the set of particles representing one possible position and orientation of the manipulated object;
a particle set adjustment unit that increases the weight of a particle the closer the position and orientation represented by that particle is to the position and orientation of the manipulated object indicated by the observation results, and, when the observation results indicate that the contact has occurred, increases the weight of the corresponding particle the closer the object of interest and the manipulated object placed at the position and orientation represented by that particle are to a state of contact in a virtual space in which the shapes and relative positional relationship of the object of interest and the manipulated object are represented;
a state estimation unit that calculates an estimated state, which is the position and orientation of the manipulated object estimated on the basis of the set of particles whose weights have been adjusted;
an action planning unit that plans an action for moving the manipulated object from the estimated state to the target state of the current movement;
an action unit that commands the robot to execute the planned action; and
a process control unit that causes the setting of the target state, the acquisition of the observation results, the setting of the set of particles, the adjustment of the set of particles, the calculation of the estimated state, the planning of the action, and the execution of the action to be repeated until the estimated state matches the completed state within a predetermined error.
2. The robot control device according to claim 1, further comprising a guide setting unit that sets a guide including the completed state and a series of the intermediate target states leading to the completed state,
wherein the target state setting unit sequentially sets the intermediate target states in the guide as the target state of the current movement, from the state farthest from the completed state toward the state closest to it, and sets the completed state as the target state of the final movement.
3. The robot control device according to claim 1 or claim 2, wherein
the completed state, the intermediate target states, the positions and orientations of the manipulated object represented by the particles, the estimated state, and the action are expressed in an object coordinate system, which is a relative coordinate system based on the object of interest;
the virtual space is a space described by the object coordinate system;
the observation unit provides the position and orientation of the manipulated object detected by the sensor as observation results expressed in the object coordinate system; and
the action unit converts the action into a command that the robot can execute and outputs the command.
4. The robot control device according to any one of claims 1 to 3, wherein the manipulated object is an object manipulated by an end effector of the robot.
5. The robot control device according to any one of claims 1 to 3, wherein the manipulated object is an end effector attached to an arm of the robot, and the object of interest is an object to be held, the holding of which by the end effector is completed in the completed state.
6. A robot control method for controlling a robot that manipulates a manipulated object, which is the object to be manipulated, so that the manipulated object transitions from a state in which it is separated from an object of interest located in the environment to a completed state in which the manipulated object is in contact with the object of interest in a specific manner, the method causing a computer to execute a process comprising:
setting, as a target state of the current movement, an intermediate target state that is a target state partway through the movement of the manipulated object toward the completed state, or the completed state;
acquiring observation results from a sensor regarding the position and orientation of the manipulated object and the presence or absence of contact between the manipulated object and the object of interest;
setting a set of particles representing the uncertainty in the position and orientation of the manipulated object, each particle included in the set of particles representing one possible position and orientation of the manipulated object;
increasing the weight of a particle the closer the position and orientation represented by that particle is to the position and orientation of the manipulated object indicated by the observation results, and, when the observation results indicate that the contact has occurred, increasing the weight of the corresponding particle the closer the object of interest and the manipulated object placed at the position and orientation represented by that particle are to a state of contact in a virtual space in which the shapes and relative positional relationship of the object of interest and the manipulated object are represented;
calculating an estimated state, which is the position and orientation of the manipulated object estimated on the basis of the set of particles whose weights have been adjusted;
planning an action for moving the manipulated object from the estimated state to the target state of the current movement;
commanding the robot to execute the planned action; and
repeating the setting of the target state, the acquisition of the observation results, the setting of the set of particles, the adjustment of the set of particles, the calculation of the estimated state, the planning of the action, and the execution of the action until the estimated state matches the completed state within a predetermined error.
7. A robot control program for controlling a robot that manipulates a manipulated object, which is the object to be manipulated, so that the manipulated object transitions from a state in which it is separated from an object of interest located in the environment to a completed state in which the manipulated object is in contact with the object of interest in a specific manner, the program causing a computer to execute a process comprising:
setting, as a target state of the current movement, an intermediate target state that is a target state partway through the movement of the manipulated object toward the completed state, or the completed state;
acquiring observation results from a sensor regarding the position and orientation of the manipulated object and the presence or absence of contact between the manipulated object and the object of interest;
setting a set of particles representing the uncertainty in the position and orientation of the manipulated object, each particle included in the set of particles representing one possible position and orientation of the manipulated object;
increasing the weight of a particle the closer the position and orientation represented by that particle is to the position and orientation of the manipulated object indicated by the observation results, and, when the observation results indicate that the contact has occurred, increasing the weight of the corresponding particle the closer the object of interest and the manipulated object placed at the position and orientation represented by that particle are to a state of contact in a virtual space in which the shapes and relative positional relationship of the object of interest and the manipulated object are represented;
calculating an estimated state, which is the position and orientation of the manipulated object estimated on the basis of the set of particles whose weights have been adjusted;
planning an action for moving the manipulated object from the estimated state to the target state of the current movement;
commanding the robot to execute the planned action; and
repeating the setting of the target state, the acquisition of the observation results, the setting of the set of particles, the adjustment of the set of particles, the calculation of the estimated state, the planning of the action, and the execution of the action until the estimated state matches the completed state within a predetermined error.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/274,194 US20240100698A1 (en) | 2021-02-05 | 2022-01-21 | Robot Control Device, Robot Control Method, and Robot Control Program |
CN202280011668.9A CN116829313A (zh) | 2021-02-05 | 2022-01-21 | 机器人控制装置、机器人控制方法以及机器人控制程序 |
EP22749509.0A EP4289564A1 (en) | 2021-02-05 | 2022-01-21 | Robot control device, robot control method, and robot control program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021017687A JP2022120650A (ja) | 2021-02-05 | 2021-02-05 | Robot control device, robot control method, and robot control program
JP2021-017687 | 2021-02-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022168634A1 true WO2022168634A1 (ja) | 2022-08-11 |
Family
ID=82741747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/002177 WO2022168634A1 (ja) | 2021-02-05 | 2022-01-21 | Robot control device, robot control method, and robot control program |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240100698A1 (ja) |
EP (1) | EP4289564A1 (ja) |
JP (1) | JP2022120650A (ja) |
CN (1) | CN116829313A (ja) |
WO (1) | WO2022168634A1 (ja) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009169581A * | 2008-01-15 | 2009-07-30 | Toyota Motor Corp | Mobile body, mobile body system, and failure diagnosis method therefor |
JP2013024864A * | 2011-07-15 | 2013-02-04 | Mitsubishi Electric Corp | Method and system for registering a probe with an object by probing the object with the probe in a plurality of poses |
JP2016528483A * | 2013-06-11 | 2016-09-15 | Somatis Sensor Solutions LLC | System and method for sensing objects |
JP2020011339A * | 2018-07-18 | 2020-01-23 | Canon Inc. | Robot system control method and robot system |
JP2021017687A | 2019-07-17 | 2021-02-15 | Central Nippon Expressway Co., Ltd. | Floor slab replacement method |
Non-Patent Citations (1)
Title |
---|
DMITRY KALASHNIKOV ET AL.: "QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation", ARXIV PREPRINT ARXIV: 1806.10293, 2018 |
Also Published As
Publication number | Publication date |
---|---|
US20240100698A1 (en) | 2024-03-28 |
CN116829313A (zh) | 2023-09-29 |
JP2022120650A (ja) | 2022-08-18 |
EP4289564A1 (en) | 2023-12-13 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22749509; Country of ref document: EP; Kind code of ref document: A1
| WWE | WIPO information: entry into national phase | Ref document number: 18274194; Country of ref document: US. Ref document number: 202280011668.9; Country of ref document: CN
| NENP | Non-entry into the national phase | Ref country code: DE
20230905 | ENP | Entry into the national phase | Ref document number: 2022749509; Country of ref document: EP; Effective date: 20230905