WO2024134802A1 - Control device, control system, control method, and recording medium - Google Patents

Control device, control system, control method, and recording medium

Info

Publication number
WO2024134802A1
Authority
WO
WIPO (PCT)
Prior art keywords
control
operation plan
unit
state
granularity
Application number
PCT/JP2022/047093
Other languages
French (fr)
Japanese (ja)
Inventor
啄茉 和田
真澄 一圓
達彦 中林
Original Assignee
NEC Corporation (日本電気株式会社)
Application filed by NEC Corporation (日本電気株式会社)
Priority to PCT/JP2022/047093 priority Critical patent/WO2024134802A1/en
Publication of WO2024134802A1 publication Critical patent/WO2024134802A1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00: Controls for manipulators

Definitions

  • This disclosure relates to a control device, a control system, a control method, and a recording medium.
  • Robots are used in a variety of fields, including logistics.
  • In such fields, control signals are sent over a communication network to control robots and other such devices.
  • Patent Document 1 discloses a related technology, which is a system that stops operations or adjusts the speed based on delays.
  • The technology of Patent Document 1 can stop an operation or adjust its speed based on the delay, but it does not determine the operation by taking the operation plan into account. It is therefore difficult to apply this technology to control that determines operations based on an operation plan.
  • Each aspect of the present disclosure has as one of its objectives the provision of a control device, control system, control method, and recording medium that can solve the above problems.
  • the control device includes a monitoring means for monitoring the state of a communication network that transmits a control signal for controlling the control object, which is generated based on an operation plan for the control object, and a determination means for determining the control amount of the control object and the granularity of the operation plan based on the state.
  • a control system includes the above control device and an object controlled by the control device.
  • a control method monitors the state of a communication network that transmits a control signal for controlling a control object that is generated based on an operation plan of the control object, and determines the control amount of the control object and the granularity of the operation plan based on the state.
  • a recording medium stores a program that causes a computer to execute the following operations: monitoring the state of a communication network that transmits a control signal for controlling a control object that is generated based on an operation plan for the control object; and determining the control amount of the control object and the granularity of the operation plan based on the state.
  • delays in communication can be appropriately reflected in control that determines actions by taking into account an action plan.
  • FIG. 1 is a diagram illustrating an example of a configuration of a control system according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating an example of a timing at which a planning unit generates an operation plan according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram illustrating a first example of the granularity of an operation plan generated by the planning unit according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram illustrating a second example of the granularity of an operation plan generated by the planning unit according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram illustrating an example of a sequence TBL1 of an operation plan generated by the planning unit according to an embodiment of the present disclosure.
  • FIG. 6 is a diagram illustrating an example of a control signal Cnt generated by a control unit according to an embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating an example of a processing flow of the control system according to an embodiment of the present disclosure.
  • FIG. 8 is a diagram illustrating an example of a control device with a minimum configuration according to an embodiment of the present disclosure.
  • FIG. 9 is a diagram illustrating an example of a processing flow of the control device with the minimum configuration according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic block diagram illustrating a configuration of a computer according to at least one embodiment.
  • the control system 1 is a system that remotely controls a control target via a communication network NW, and is a system that creates an operation plan for the control target taking into account delays in the communication network NW.
  • the control target includes a robot
  • the operation plan is a robot operation plan generated when the robot moves an object M to a destination.
  • destinations include, for example, cardboard boxes for packaging the object M at the time of shipment, trays for sorting the object M at the time of arrival, and positions for reading barcodes attached to the object M at the time of arrival and departure.
  • The control system 1 does not limit the control target to a robot.
  • The control target may be any device that operates in response to a control signal transmitted via a communication network.
  • Fig. 1 is a diagram illustrating an example of a configuration of a control system 1 according to an embodiment of the present disclosure.
  • the control system 1 includes a control device 10, a control target device 20, and a storage unit 106.
  • the control device 10, the control target device 20, and the storage unit 106 are connected via a communication network NW.
  • the control device 10 includes an input unit 101, a recognition unit 102, a measurement unit 103 (an example of a monitoring means), a planning unit 104 (an example of a decision means), and one or more control units 105 (an example of a generation means).
  • the control device 10 may also include a storage unit 106. Examples of the control device 10 include an edge server and a cloud server.
  • the input unit 101 inputs the task goal and the constraint conditions to the planning unit 104.
  • Examples of the task goal include information indicating the type of object M, the number of objects M to be moved, the origin of the object M, and the destination of the object M.
  • the constraint conditions include a no-entry area when moving the object M, an area that deviates from the range of motion of the robot 203 described later, and further, conditions on the surface of the object M regarding the gripping of the object M, the release of the gripping of the object M, or the transfer of the object M.
  • the input unit 101 may receive an input from the user as a task goal, for example, "move three parts A from the tray T to the cardboard box C," and specify that the type of the object M to be moved is part A, the number of objects M to be moved is three, the origin of the object M is the tray T, and the destination of the object M is the cardboard box C.
  • the input unit 101 may input the specified information to the planning unit 104.
  • the position of the object M identified in the image captured by the image capture device 201 may be set as the origin of the object M.
  • the input unit 101 may receive, for example, the positions of obstacles along the path of the object M from the origin to the destination from the user as constraint conditions indicating no-entry areas, and input the information to the planning unit 104.
  • a file indicating the constraint conditions may be stored in the storage unit 106, and the input unit 101 may input the constraint conditions indicated by the file to the planning unit 104, or the planning unit 104 may read the constraint conditions from the file, or both.
  • As long as the planning unit 104 can acquire the necessary task goal and constraint conditions, any acquisition method may be used.
  • the recognition unit 102 acquires an image showing the environment captured by the image capturing device 201 (described later) from the controlled device 20.
  • the recognition unit 102 recognizes the environment shown in the image acquired from the image capturing device 201. Examples of the environment include the state (i.e., position and posture) of the object M at position P, the state of static objects other than the object M, and the state of dynamic objects.
  • the recognition unit 102 outputs information showing the recognized environment to the planning unit 104.
  • The measurement unit 103 measures the delay time of communication in the communication network NW. For example, the measurement unit 103 may measure the delay time as the time from transmitting a dummy signal until an ACK (Acknowledgement) signal is returned, or by transmitting a ping. The measurement unit 103 outputs the measured delay time to the planning unit 104.
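  • The round-trip measurement described above can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure; `send_and_wait_ack` is a hypothetical stand-in for the real transport, simulated here with a sleep.

```python
import time

def measure_delay(send_and_wait_ack):
    """Measure one communication delay sample (round-trip time / 2).

    send_and_wait_ack is a callable that transmits a dummy signal and
    blocks until the ACK returns -- a stand-in for the real transport.
    """
    start = time.monotonic()
    send_and_wait_ack()
    rtt = time.monotonic() - start
    return rtt / 2.0  # one-way delay, assuming a roughly symmetric link

# Stand-in transport that "delays" about 10 ms before acknowledging.
sample = measure_delay(lambda: time.sleep(0.01))
print(f"estimated one-way delay: {sample * 1000:.1f} ms")
```

  • Halving the round-trip time assumes the uplink and downlink delays are comparable; an asymmetric link would need a different estimator.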
  • the planning unit 104 generates an operation plan indicating the flow of the robot 203's operation based on the delay time measured by the measurement unit 103, the work goal and constraint conditions input by the input unit 101, and information indicating the environment recognized by the recognition unit 102. For example, when the work goal and constraint conditions are input by the input unit 101, the planning unit 104 acquires information (images) indicating the environment recognized by the recognition unit 102. Specifically, for example, the planning unit 104 acquires an image of the source of the object M indicated by the work goal from the imaging device 201. The planning unit 104 can recognize the environment (for example, the state (i.e., position and posture) of the object M at the source) from the image acquired from the imaging device 201.
  • the planning unit 104 generates, for example, a movement path (part of the operation plan) including the state of the object M from the state of the object M at the source of the movement to the state of the object M at the destination, for example, by simulation.
  • the information representing the movement path is information necessary for the control unit 105 to generate a control signal for controlling the robot 203.
  • the planning unit 104 then generates information (i.e., a sequence (part of the motion plan)) indicating each state of the robot 203 at each time step during the movement (for example, the type (including the shape) of the object M, the position and posture of the robot 203, the motion of the robot 203 (the grip strength of the object M, etc.), etc.) by, for example, simulation.
  • the planning unit 104 outputs the generated sequence to the control unit 105.
  • the planning unit 104 may be realized using artificial intelligence (AI) technology such as temporal logic, reinforcement learning, and optimization technology.
  • FIG. 2 is a diagram showing an example of the timing at which the planning unit 104 generates an operation plan according to an embodiment of the present disclosure.
  • the horizontal direction in FIG. 2 represents the passage of time, with time moving to the right.
  • the vertical axis in the portion (a) of FIG. 2 represents the delay in communication performed through the communication network NW. The communication delay increases as the vertical axis increases.
  • the portion (b) of FIG. 2 represents the period during which the operation plan is performed and the period during which control is performed as time passes.
  • the planning unit 104 generates an operation plan each time control according to the operation plan is completed. When generating a new plan, the planning unit 104 generates an operation plan with a granularity determined based on the delay time in communication, for example.
  • The delay time used here may be a past delay time, for example one measured over a period going back a predetermined length of time from the timing at which the new operation plan is generated, over the period from the generation of the previous operation plan to that timing, or over a predetermined number of past plan generations.
  • The delay time may also be a predicted future delay time. The delay time may be obtained by statistical methods such as the maximum value or the average value (including an average calculated as a moving average), or by a single measurement at a predetermined timing.
  • the measurement unit 103 may measure the communication delay time in the communication network NW in advance, and the planner 104 may use that delay time.
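  • The statistical summaries mentioned above (maximum, moving average) can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure; the class name and window size are assumptions for the example.

```python
from collections import deque

class DelayStatistics:
    """Keep recent delay samples and summarize them, as the measurement
    unit might: by maximum or by a moving average over a window."""

    def __init__(self, window=5):
        # Only the most recent `window` samples are retained.
        self.samples = deque(maxlen=window)

    def add(self, delay):
        self.samples.append(delay)

    def maximum(self):
        return max(self.samples)

    def moving_average(self):
        return sum(self.samples) / len(self.samples)

stats = DelayStatistics(window=3)
for d in [0.010, 0.030, 0.020, 0.040]:  # delay samples in seconds
    stats.add(d)
print(stats.maximum())         # max over the last 3 samples
print(stats.moving_average())  # average of the last 3 samples (~0.03)
```

  • A bounded window keeps the statistic responsive: old samples fall out automatically, so a past congestion spike stops inflating the estimate.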
  • FIG. 3 is a diagram showing a first example of the granularity of a motion plan generated by the planning unit 104 according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram showing a second example of the granularity of a motion plan generated by the planning unit 104 according to an embodiment of the present disclosure.
  • The granularity of an operation plan is the time interval between the states (position and posture) defined at successive times, i.e., the length of the time step.
  • part (a) of Figure 3 and part (a) of Figure 4 the horizontal direction represents the passage of time, with time moving to the right.
  • the vertical axis in part (a) of Figure 3 and part (a) of Figure 4 represents the delay in communication carried out via the communication network NW. The higher up on the vertical axis, the greater the communication delay. Note that part (a) of Figure 3 represents an example where the delay time is small. Also, part (a) of Figure 4 represents an example where the delay time is large.
  • part (b) of FIG. 3 and part (b) of FIG. 4 show an image of the state of the object M in three-dimensional space at each predetermined timing defined in the motion plan.
  • the circles in part (b) of FIG. 3 and part (b) of FIG. 4 show the state of the object M at each predetermined time t (i.e., t1, t2, t3, ..., t12, ...) defined in the motion plan.
  • the spacing between the circles shown in part (b) of FIG. 4 is wider than the spacing between the circles shown in part (b) of FIG. 3. Therefore, this shows that the moving distance per unit time of the object M shown in part (b) of FIG. 4 is longer than the moving distance per unit time of the object M shown in part (b) of FIG. 3.
  • part (c) of FIG. 3 and part (c) of FIG. 4 show an image of the control amount in the control performed between adjacent predetermined timings defined in the operation plan.
  • The control amount is the amount by which the part of the robot 203 that grasps the object M must move, including its posture, within each interval between the timings at which control is performed (the timings at which control commands are received; hereinafter, a "control interval"). In other words, it is the amount of change in the state of the object M within each control interval. For example, as shown in part (b) of FIG. 4, if the moving distance of the object M per unit time is long, the moving distance of the object M within each control interval is made long (i.e., the amount of change in the state of the object M is made large).
  • control amount shown in part (c) of FIG. 4 is greater than the control amount shown in part (c) of FIG. 3.
  • Part (b) of FIG. 3 shows the granularity of the operation plan when the delay time measured by the measurement unit 103 is small.
  • Part (b) of FIG. 4 shows the granularity of the operation plan when the delay time measured by the measurement unit 103 is large.
  • the magnitude of the communication delay may be determined by comparing multiple time transitions of the communication delay, or may be determined by determining whether the communication delay satisfies a criterion for determining that communication is delayed.
  • the communication delay may be the total value of the delay time at a certain time, or may be the delay time at a certain timing.
  • The planning unit 104 generates an operation plan with a granularity according to the delay time in the communication network NW. Specifically, the planning unit 104 generates an operation plan with a larger granularity as the delay time in the communication network NW increases; for example, it may set the granularity of the operation plan to five times the delay time. As described for part (c) of FIG. 3 and part (c) of FIG. 4, the granularity of the operation plan and the control amount are linked: as the granularity of the operation plan increases, the control amount increases. Information on this control amount is included in the operation plan.
  • the planning unit 104 first determines the granularity of the operation plan according to the delay time. Next, the planning unit 104 determines the state of the robot for each time corresponding to the granularity. Then, the planning unit 104 generates an operation plan for controlling the controlled device 20 with a control amount smaller than the granularity (i.e., a short time interval).
  • the control unit 105 which will be described later, reflects such control amounts to generate a control signal, enabling the control device 10 to perform reactive control of the controlled device 20.
  • Reactive control refers to control that is performed sequentially at short time intervals. This reactive control makes it possible to avoid collisions with obstacles that move suddenly and approach.
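  • The steps above (granularity from delay, states at each granularity step, control at a finer interval) can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure; the factor of 5 follows the example in the text, and the one-dimensional linear motion is a toy stand-in for the simulated plan.

```python
def plan_granularity(delay, factor=5.0):
    # Granularity (time-step length) grows with the measured delay.
    # The factor of 5 follows the example in the text, not a fixed rule.
    return factor * delay

def plan_states(start, goal, granularity, duration):
    # Linearly interpolated one-dimensional states, one per time step:
    # a toy stand-in for the simulated operation plan.
    steps = max(1, round(duration / granularity))
    return [start + (goal - start) * i / steps for i in range(steps + 1)]

g = plan_granularity(0.02)                       # 20 ms delay -> 0.1 s step
states = plan_states(0.0, 1.0, g, duration=1.0)  # states for a 1 s motion
print(len(states))  # 11 states: one per 0.1 s step, plus the start
```

  • The control signals generated later subdivide each of these steps, which is what allows the reactive, short-interval control described above.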
  • FIG. 5 is a diagram showing an example of a sequence TBL1 of an operation plan generated by the planning unit 104 according to an embodiment of the present disclosure.
  • the sequence TBL1 of an operation plan generated by the planning unit 104 is a sequence showing each state of the robot 203 (described later) for each n time steps from the origin of movement of the object M to the destination.
  • The control unit 105 generates a control signal for controlling a control target (in this example, the robot 203) in the controlled device 20. For example, when moving an object M to a destination, the control unit 105 generates a control signal for moving to the position at which the object M is recognized (hereinafter, the "recognition position") and for moving the object M from the recognition position to the destination.
  • control unit 105 generates a control signal for controlling the robot 203 based on the sequence output by the planning unit 104 and information on each joint angle of the robot 203 transmitted from the controller 202 described below.
  • the control signal includes a signal for controlling each joint angle of the robot 203.
  • the sequence includes a control amount. Therefore, the control unit 105 generates a control signal that reflects the control amount. In other words, the control unit 105 generates a control signal for adjusting the joint angle at a time interval shorter than the time step so as to approach the target posture from the current joint angle.
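  • The joint-angle adjustment described above, at a time interval shorter than the plan's time step, can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure; the joint values and number of substeps are assumptions for the example.

```python
def control_signals(current, target, substeps):
    """Generate joint-angle commands at a time interval finer than the
    plan's time step, stepping from the current angles toward the
    target posture. Angles are per joint, in degrees for readability."""
    commands = []
    for i in range(1, substeps + 1):
        frac = i / substeps  # fraction of the way to the target
        commands.append([c + (t - c) * frac for c, t in zip(current, target)])
    return commands

# One plan time step broken into 4 finer control commands for 2 joints.
cmds = control_signals(current=[0.0, 90.0], target=[40.0, 70.0], substeps=4)
for c in cmds:
    print(c)  # the final command equals the target posture
```

  • Because each command moves only a fraction of the way, the controller can re-plan between substeps, which is what enables the reactive control mentioned earlier.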
  • the control unit 105 may generate a control signal that optimizes an evaluation function when generating the control signal.
  • Examples of the evaluation function include a function representing the amount of energy consumed by the robot 203 when moving the object M, and a function representing the distance along the path along which the object M is moved.
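  • The path-distance evaluation function mentioned above can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure; representing each state as a 2-D position is a simplification for the example.

```python
import math

def path_length(states):
    """Evaluation function: total distance along the path of object M,
    with each state simplified to a 2-D position."""
    return sum(math.dist(a, b) for a, b in zip(states, states[1:]))

straight = [(0, 0), (1, 0), (2, 0)]  # direct path
detour   = [(0, 0), (1, 1), (2, 0)]  # path with a vertical excursion
print(path_length(straight))  # 2.0
print(path_length(detour))    # 2*sqrt(2), about 2.83
```

  • A control signal that minimizes this function prefers the straight path; an energy-based evaluation function would be built analogously from per-segment costs.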
  • the control unit 105 transmits the generated control signal to the controlled device 20.
  • FIG. 6 is a diagram showing an example of a control signal Cnt generated by the control unit 105 according to an embodiment of the present disclosure.
  • the control signal Cnt of the initial plan generated by the control unit 105 is each control signal and control amount for each time step from the origin of movement of the object M to the destination, as shown in FIG. 6.
  • the storage unit 106 stores various information necessary for the processing performed by the control device 10. For example, the storage unit 106 stores a file indicating constraint conditions, a sequence TBL1, a control signal Cnt for each time step, and a control amount.
  • the controlled device 20 includes an image capture device 201, one or more controllers 202 (corresponding to the control unit 105), and a robot 203 controlled by the controller 202.
  • the controller 202 may control multiple robots 203.
  • the controlled device 20 is a device that includes a robot 203 that is to be controlled by the control device 10.
  • the imaging device 201 captures images of the environment around the robot 203, including the state of the object M.
  • the imaging device 201 is, for example, an industrial camera, and is capable of identifying the state (i.e., position and posture) of the object M.
  • the image captured by the imaging device 201 is transmitted to the control device 10 together with time information such as a timestamp.
  • the controller 202 controls the robot in response to control signals sent from the control device 10.
  • the controller 202 also acquires information on each joint angle of the robot 203 that has operated in response to the control.
  • the controller 202 then transmits the acquired information on each joint angle to the control device 10 together with time information such as a timestamp.
  • the robot 203 operates according to the control of the corresponding controller 202. For example, if the control signal transmitted from the control device 10 to the controlled device 20 is a control signal for moving the object M from state A to state B, the robot 203, under the control of the controller 202, performs an operation of grasping the object M and moving the object M from state A to state B.
  • the control device 10 can know when the information transmitted along with the time information was transmitted, and can maintain a certain level of accuracy in the control signal it generates.
  • FIG. 7 is a diagram showing an example of a processing flow of the control system 1 according to an embodiment of the present disclosure. Next, details of the processing performed by the control device 10 in the control system 1 will be described with reference to Fig. 7. It is assumed that the control device 10 receives information indicating the environment photographed by the imaging device 201 from the control target device 20.
  • the input unit 101 inputs the work goal and constraint conditions to the planning unit 104 (step S1).
  • the recognition unit 102 acquires an image showing the environment captured by the imaging device 201 from the controlled device 20.
  • the recognition unit 102 recognizes the environment shown in the image acquired from the imaging device 201 (step S2).
  • the recognition unit 102 outputs information showing the recognized environment to the planning unit 104.
  • the measurement unit 103 measures the communication delay time in the communication network NW (step S3).
  • the measurement unit 103 outputs the measured delay time to the planning unit 104.
  • the planning unit 104 generates an operation plan indicating the flow of operation of the robot 203 based on the delay time measured by the measurement unit 103, the task goal and constraint conditions input by the input unit 101, and information indicating the environment recognized by the recognition unit 102. For example, the planning unit 104 generates an operation plan with a granularity according to the delay time in the communication network NW (step S4). Specifically, the planning unit 104 generates an operation plan with a larger granularity as the delay time in the communication network NW increases. More specifically, the planning unit 104 generates an operation plan with a granularity of, for example, five times the delay time. The planning unit 104 outputs the generated sequence to the control unit 105.
  • the control unit 105 generates a control signal for controlling the robot 203, which is the control target in the controlled device 20, based on the operation plan (step S5). For example, when moving the object M to a destination, the control unit 105 generates a control signal for moving the object M to a recognition position where the object M is recognized, and for moving the object M from the recognition position to the destination.
  • control unit 105 generates a control signal for controlling the robot 203 at a time interval shorter than the time step, based on the sequence output by the planning unit 104 and information on each joint angle of the robot 203 transmitted from the controller 202.
  • the control unit 105 transmits the generated control signal to the controlled device 20 (step S6).
  • the control device 10 includes a measurement unit 103 (an example of a monitoring means for monitoring a state) that measures a communication delay time in a communication network NW that transmits a control signal for controlling a robot 203 (an example of a control target) that is generated based on an operation plan of the robot 203, and a planning unit 104 (an example of a determining means) that determines a control amount of the robot 203 and a granularity of the operation plan based on the delay time.
  • control system 1 can appropriately reflect communication delays in the control that determines the operation taking into account the operation plan.
  • the measurement unit 103 has been described as measuring the delay time of communication in the communication network NW.
  • the measurement unit 103 may measure the amount of data of communication in the communication network NW, identify the corresponding delay time from the measured amount of data, and use the identified delay time in place of the delay time in an embodiment of the present disclosure.
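  • The lookup from measured data amount to a corresponding delay time can be sketched as follows. This is an illustrative Python sketch, not part of the disclosure; the table values are hypothetical, since the disclosure does not specify the mapping.

```python
def delay_from_data_amount(measured_bytes, table):
    """Identify the delay corresponding to a measured data amount, using
    the largest table threshold not exceeding the measurement."""
    eligible = [amount for amount in table if amount <= measured_bytes]
    key = max(eligible) if eligible else min(table)
    return table[key]

# Hypothetical mapping: bytes in flight -> expected delay (seconds).
table = {0: 0.005, 10_000: 0.020, 100_000: 0.080}
print(delay_from_data_amount(50_000, table))  # falls in the 10 kB band
```

  • Such a table could be calibrated offline by measuring delay at several traffic levels, letting the planning unit 104 estimate delay without an active probe.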
  • FIG. 8 is a diagram showing an example of a minimum configuration of the control device 10 according to an embodiment of the present disclosure.
  • the minimum configuration of the control device 10 includes a measurement unit 103 (an example of a monitoring means) and a planning unit 104 (an example of a determination means).
  • The measurement unit 103 measures the communication delay time in the communication network NW that transmits a control signal for controlling the robot 203 (an example of a control target), the control signal being generated based on the operation plan of the robot 203 (an example of monitoring the state).
  • the measurement unit 103 can be realized, for example, by using the function of the measurement unit 103 illustrated in FIG. 1.
  • the planner 104 determines the control amount of the robot 203 and the granularity of the operation plan based on the delay time.
  • the planner 104 can be realized, for example, by using the function of the planner 104 illustrated in FIG. 1.
  • FIG. 9 is a diagram showing an example of a processing flow of the control device 10 with the minimum configuration according to an embodiment of the present disclosure.
  • the processing of the control device 10 with the minimum configuration will be described with reference to FIG. 9.
  • the measurement unit 103 measures the communication delay time in the communication network NW that transmits the control signal for controlling the robot 203, which is generated based on the motion plan of the robot 203 (step S101).
  • the planning unit 104 determines the control amount of the robot 203 and the granularity of the motion plan based on the delay time (step S102).
  • The above describes the control device 10 with a minimum configuration according to an embodiment of the present disclosure. This control device 10 can appropriately reflect communication delays in control that determines operations by taking into account an operation plan.
  • control system 1, control device 10, controlled device 20, and other control devices may have a computer device inside.
  • the above-mentioned process steps are stored in the form of a program on a computer-readable recording medium, and the above-mentioned processes are performed by the computer reading and executing this program. Specific examples of computers are shown below.
  • FIG. 10 is a schematic block diagram showing the configuration of a computer according to at least one embodiment.
  • the computer 5 includes a CPU (Central Processing Unit) 6, a main memory 7, a storage 8, and an interface 9.
  • the above-mentioned control system 1, control device 10, controlled device 20, and other control devices are each implemented in the computer 5.
  • the operation of each of the above-mentioned processing units is stored in the storage 8 in the form of a program.
  • the CPU 6 reads the program from the storage 8 and expands it in the main memory 7, and executes the above-mentioned processing according to the program.
  • the CPU 6 also secures storage areas in the main memory 7 corresponding to each of the above-mentioned storage units according to the program.
  • storage 8 examples include HDD (Hard Disk Drive), SSD (Solid State Drive), magnetic disk, magneto-optical disk, CD-ROM (Compact Disc Read Only Memory), DVD-ROM (Digital Versatile Disc Read Only Memory), semiconductor memory, etc.
  • Storage 8 may be an internal medium directly connected to the bus of computer 5, or an external medium connected to computer 5 via interface 9 or a communication line.
  • computer 5 when this program is distributed to computer 5 via a communication line, computer 5 that receives the program may expand the program in main memory 7 and execute the above process.
  • storage 8 is a non-transitory tangible storage medium.
  • the program may also realize some of the functions described above.
  • the program may be a file that can realize the functions described above in combination with a program already recorded in the computer device, a so-called differential file (differential program).
  • (Appendix 1) A control device comprising: a monitoring means for monitoring a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan for the control target; and a determining means for determining a control amount of the control target and a granularity of the operation plan based on the state.
  • (Appendix 2) The control device according to Appendix 1, further comprising a generating means for generating the control signal based on the control amount and the granularity of the operation plan determined by the determining means.
  • (Appendix 3) The control device according to Appendix 1 or 2, wherein the control amount is smaller than the granularity of the operation plan.
  • (Appendix 4) The control device according to any one of Appendices 1 to 3, wherein the control target is a robot.
  • (Appendix 5) A control system comprising: the control device according to any one of Appendices 1 to 4; and a control target of the control device.
  • (Appendix 6) A control method comprising: monitoring a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan for the control target; and determining a control amount of the control target and a granularity of the operation plan based on the state.
  • according to the above aspects, delays in communication can be appropriately reflected in control that determines operations by taking an operation plan into account.
  • 1: Control system, 5: Computer, 6: CPU, 7: Main memory, 8: Storage, 9: Interface, 10: Control device, 20: Control target device, 101: Input unit, 102: Recognition unit, 103: Measurement unit, 104: Planning unit, 105: Control unit, 201: Imaging device, 202: Controller, 203: Robot

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

This control device comprises: a monitoring means that monitors the state of a communication network via which a control signal for controlling a control object is transmitted, the control signal being generated on the basis of an operation plan for the control object; and a determination means that, on the basis of the state, determines a control quantity pertaining to the control object and determines the granularity of the operation plan.

Description

CONTROL DEVICE, CONTROL SYSTEM, CONTROL METHOD, AND RECORDING MEDIUM

This disclosure relates to a control device, a control system, a control method, and a recording medium.

Robots are used in a variety of fields, including logistics. In some cases, control signals for controlling robots and other devices are transmitted over a communication network. As a related technology, Patent Document 1 discloses a system that stops an operation or adjusts its speed based on delay.

Patent Document 1: Japanese National Publication of International Patent Application No. 2021-524298

The technology described in Patent Document 1 can stop an operation or adjust its speed based on delay, but it does not determine operations by taking an operation plan into account. It is therefore difficult to apply this technology to control that determines operations in consideration of an operation plan.

One objective of each aspect of the present disclosure is to provide a control device, a control system, a control method, and a recording medium that can solve the above problem.

To achieve the above objective, according to one aspect of the present disclosure, a control device includes a monitoring means for monitoring a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan for the control target, and a determining means for determining a control amount of the control target and a granularity of the operation plan based on the state.

To achieve the above objective, according to another aspect of the present disclosure, a control system includes the above control device and a control target of the control device.

To achieve the above objective, according to another aspect of the present disclosure, a control method monitors a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan for the control target, and determines a control amount of the control target and a granularity of the operation plan based on the state.

To achieve the above objective, according to another aspect of the present disclosure, a recording medium stores a program that causes a computer to monitor a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan for the control target, and to determine a control amount of the control target and a granularity of the operation plan based on the state.

According to each aspect of the present disclosure, communication delays can be appropriately reflected in control that determines operations by taking an operation plan into account.
FIG. 1 is a diagram illustrating an example of a configuration of a control system according to an embodiment of the present disclosure.
FIG. 2 is a diagram illustrating an example of the timing at which a planning unit generates an operation plan according to an embodiment of the present disclosure.
FIG. 3 is a diagram illustrating a first example of the granularity of an operation plan generated by a planning unit according to an embodiment of the present disclosure.
FIG. 4 is a diagram illustrating a second example of the granularity of an operation plan generated by a planning unit according to an embodiment of the present disclosure.
FIG. 5 is a diagram illustrating an example of a sequence TBL1 of an operation plan generated by a planning unit according to an embodiment of the present disclosure.
FIG. 6 is a diagram illustrating an example of a control signal Cnt generated by a control unit according to an embodiment of the present disclosure.
FIG. 7 is a diagram illustrating an example of a processing flow of a control system according to an embodiment of the present disclosure.
FIG. 8 is a diagram illustrating an example of a control device with a minimum configuration according to an embodiment of the present disclosure.
FIG. 9 is a diagram illustrating an example of a processing flow of a control device with a minimum configuration according to an embodiment of the present disclosure.
FIG. 10 is a schematic block diagram illustrating a configuration of a computer according to at least one embodiment.
Hereinafter, the embodiments will be described in detail with reference to the drawings.
<Embodiment>
The control system 1 according to an embodiment of the present disclosure is a system that remotely operates a control target via a communication network NW and creates an operation plan for the control target in consideration of delays in the communication network NW. As a specific example in this embodiment, the control target includes a robot, and the operation plan is the robot's operation plan generated when the robot moves an object M to a destination. Examples of the destination include a cardboard box for packing the object M at shipment, a tray for sorting the object M at arrival, and a position for reading a barcode attached to the object M at shipment or arrival. However, the control target of the control system 1 is not limited to a robot; the control target may be anything that operates in response to a control signal transmitted via a communication network.
(Control System Configuration)
Fig. 1 is a diagram illustrating an example of a configuration of a control system 1 according to an embodiment of the present disclosure. As shown in Fig. 1, the control system 1 includes a control device 10, a control target device 20, and a storage unit 106. The control device 10, the control target device 20, and the storage unit 106 are connected via a communication network NW.
As shown in FIG. 1, the control device 10 includes an input unit 101, a recognition unit 102, a measurement unit 103 (an example of a monitoring means), a planning unit 104 (an example of a determining means), and one or more control units 105 (an example of a generating means). The control device 10 may also include the storage unit 106. Examples of the control device 10 include an edge server and a cloud server.
The input unit 101 inputs a task goal and constraint conditions to the planning unit 104. Examples of the task goal include information indicating the type of the object M, the number of objects M to be moved, the source of the object M, and the destination of the object M. Examples of the constraint conditions include a no-entry area when moving the object M, an area outside the range of motion of the robot 203 described later, and conditions on the surfaces of the object M for gripping it, releasing it, or re-gripping it. The input unit 101 may, for example, receive from the user a task goal such as "move three parts A from tray T to cardboard box C" and specify that the type of object M to be moved is part A, the number of objects M to be moved is three, the source of the object M is tray T, and the destination of the object M is cardboard box C. The input unit 101 may then input the specified information to the planning unit 104. The position of the object M identified in an image captured by the imaging device 201 may also be used as the source of the object M. The input unit 101 may also receive from the user, as a constraint condition indicating a no-entry area, the positions of obstacles along the path of the object M from the source to the destination, and input that information to the planning unit 104. Alternatively, a file indicating the constraint conditions may be stored in the storage unit 106, and the input unit 101 may input the constraint conditions indicated by the file to the planning unit 104, or the planning unit 104 may read the constraint conditions from the file, or both. In other words, any acquisition method may be used as long as the planning unit 104 can acquire the necessary task goal and constraint conditions.
The recognition unit 102 acquires, from the controlled device 20, an image showing the environment captured by the imaging device 201 described later. The recognition unit 102 recognizes the environment shown in the acquired image. Examples of the environment include the state (i.e., position and posture) of the object M at position P, the states of static objects other than the object M, and the states of dynamic objects. The recognition unit 102 outputs information indicating the recognized environment to the planning unit 104.
The measurement unit 103 measures the communication delay time in the communication network NW. For example, the measurement unit 103 may measure the delay time by transmitting a dummy signal and measuring the time until an ACK (Acknowledge) signal is returned, or by transmitting a PIN (Personal Identification Number). The measurement unit 103 outputs the measured delay time to the planning unit 104.
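A delay measurement of this kind can be sketched as a round-trip timing loop. This is a minimal sketch: `send_and_wait_ack` is a hypothetical stand-in for the real network I/O (the document does not specify the transport), and averaging over a few samples is one illustrative choice.

```python
import time

def measure_delay(send_and_wait_ack, samples=5):
    """Estimate the communication delay by timing dummy-signal round trips.

    send_and_wait_ack: hypothetical callable that sends a dummy signal
    and blocks until the ACK arrives (stands in for the real NW I/O).
    Returns the mean round-trip time in seconds.
    """
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        send_and_wait_ack()  # dummy signal out, ACK back
        rtts.append(time.perf_counter() - start)
    return sum(rtts) / len(rtts)

# Usage with a stand-in transport that takes about 10 ms per round trip:
rtt = measure_delay(lambda: time.sleep(0.01), samples=3)
print(rtt >= 0.01)  # True
```

In practice the result of this measurement would be what the measurement unit 103 hands to the planning unit 104.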
The planning unit 104 generates an operation plan indicating the flow of the operation of the robot 203 based on the delay time measured by the measurement unit 103, the task goal and constraint conditions input by the input unit 101, and the information indicating the environment recognized by the recognition unit 102. For example, when the task goal and constraint conditions are input by the input unit 101, the planning unit 104 acquires the information (an image) indicating the environment recognized by the recognition unit 102. Specifically, the planning unit 104 acquires, for example, an image of the source of the object M indicated by the task goal from the imaging device 201. From this image, the planning unit 104 can recognize the environment (for example, the state, i.e., the position and posture, of the object M at the source). The planning unit 104 then generates, for example by simulation, a movement path (part of the operation plan) that includes the states of the object M from its state at the source to its state at the destination. The information representing this movement path is the information the control unit 105 needs to generate control signals for controlling the robot 203. The planning unit 104 also generates, for example by simulation, information indicating each state of the robot 203 at each intermediate time step (for example, the type (including the shape) of the object M, the position and posture of the robot 203, and the operation of the robot 203, such as the grip strength on the object M), that is, a sequence (part of the operation plan). The planning unit 104 outputs the generated sequence to the control unit 105. The planning unit 104 may be realized using artificial intelligence (AI) techniques such as temporal logic, reinforcement learning, and optimization.
FIG. 2 is a diagram showing an example of the timing at which the planning unit 104 generates an operation plan according to an embodiment of the present disclosure. The horizontal direction in FIG. 2 represents the passage of time, with time advancing to the right. The vertical axis in part (a) of FIG. 2 represents the delay of communication performed through the communication network NW; the higher on the axis, the larger the communication delay. Part (b) of FIG. 2 represents the periods during which operation planning is performed and the periods during which control is performed as time passes. As shown in FIG. 2, the planning unit 104 generates a new operation plan each time control according to the previous operation plan is completed. When generating a new plan, the planning unit 104 generates the operation plan with a granularity determined based on, for example, the communication delay time. Examples of the communication delay time include past delay times, such as those over a fixed period preceding the timing at which the new operation plan is generated, over the period from that timing back to the timing at which the previous operation plan was generated, or over the period back to the generation timing of the operation plan a predetermined number of plans earlier. Examples of the communication delay time also include a predicted future delay time. Further examples include a delay time obtained with a statistical method such as a maximum or an average (including an average calculated as a moving average), and a delay time obtained by a single measurement at a predetermined timing. When the planning unit 104 generates the initial operation plan, the measurement unit 103 may, for example, measure the communication delay time in the communication network NW in advance, and the planning unit 104 may use that delay time.
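The statistical choices mentioned above (maximum, average, moving average over recent samples) can be sketched as a small monitor class. The class name and window size are illustrative assumptions, not part of the disclosed method.

```python
from collections import deque

class DelayMonitor:
    """Keep recent delay samples and summarize them, as one way to pick
    the delay value used for planning (moving average or worst case)."""

    def __init__(self, window=5):
        # Only the most recent `window` samples are retained.
        self.samples = deque(maxlen=window)

    def add(self, delay_s):
        self.samples.append(delay_s)

    def moving_average(self):
        return sum(self.samples) / len(self.samples)

    def worst_case(self):
        return max(self.samples)

# With a window of 3, the oldest sample (0.01) falls out of the average:
m = DelayMonitor(window=3)
for d in (0.01, 0.02, 0.03, 0.06):
    m.add(d)
print(m.worst_case())  # 0.06
```

A conservative planner might use `worst_case()`, while `moving_average()` smooths out one-off spikes.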
FIG. 3 is a diagram showing a first example of the granularity of an operation plan generated by the planning unit 104 according to an embodiment of the present disclosure, and FIG. 4 is a diagram showing a second example. The granularity of an operation plan is the relationship between the states (position and posture) at successive times, that is, the time length that defines the fineness of the time steps.
In part (a) of FIG. 3 and part (a) of FIG. 4, the horizontal direction represents the passage of time, with time advancing to the right. The vertical axis in these parts represents the delay of communication performed via the communication network NW; the higher on the axis, the larger the communication delay. Part (a) of FIG. 3 shows an example in which the delay time is small, and part (a) of FIG. 4 shows an example in which the delay time is large.
Part (b) of FIG. 3 and part (b) of FIG. 4 show images of the state of the object M in three-dimensional space at each timing defined in the operation plan. The circles in these parts represent the state of the object M at each time t (i.e., t1, t2, t3, ..., t12, ...) defined in the operation plan. The spacing between the circles in part (b) of FIG. 4 is wider than in part (b) of FIG. 3, which indicates that the distance the object M moves per unit time in part (b) of FIG. 4 is longer than in part (b) of FIG. 3.
Part (c) of FIG. 3 and part (c) of FIG. 4 show images of the control amount in the control performed between adjacent timings defined in the operation plan. The control amount is the amount, including posture, by which the part of the robot 203 gripping the object M must move within the interval between control timings (the timings at which control commands are received; hereinafter, the "control interval"), that is, the amount of change in the state of the object M within each control interval. For example, as shown in part (b) of FIG. 4, when the distance the object M moves per unit time is long, the distance the object M moves within each control interval is made long (i.e., the amount of change in the state of the object M is made large). Conversely, as shown in part (b) of FIG. 3, when the distance the object M moves per unit time is short, the distance the object M moves within each control interval is short (i.e., the amount of change in the state of the object M is small). In other words, the control amount shown in part (c) of FIG. 4 is larger than that shown in part (c) of FIG. 3.
As an example, consider a case in which the planning unit 104 generates an operation plan that moves the object M from state A to state B while transitioning through each state at each timing. Part (b) of FIG. 3 shows the granularity of the operation plan when the delay time measured by the measurement unit 103 is small, and part (b) of FIG. 4 shows the granularity when that delay time is large. Whether the communication delay is large or small may be determined by comparing multiple time series of the communication delay, or by judging whether the communication delay satisfies a criterion for deciding that communication is delayed. The communication delay may be the total of the delay times over a certain period, or the delay time at a certain timing.
As can be seen from FIGS. 3 and 4, the planning unit 104 generates an operation plan with a granularity corresponding to the delay time in the communication network NW. Specifically, the planning unit 104 generates an operation plan with a larger granularity as the delay time in the communication network NW increases; more specifically, it may set the granularity of the operation plan to, for example, five times the delay time. As explained for part (c) of FIG. 3 and part (c) of FIG. 4, the granularity of the operation plan and the control amount are linked: as the granularity of the operation plan increases, the control amount increases. Information on this control amount is included in the operation plan. That is, the planning unit 104 first determines the granularity of the operation plan according to the delay time, next determines the state of the robot at each time corresponding to that granularity, and then generates an operation plan that controls the controlled device 20 with a control amount smaller than the granularity (i.e., at shorter time intervals). By having the control unit 105, described later, reflect this control amount when generating control signals, the control device 10 can perform reactive control of the controlled device 20. Reactive control is control performed sequentially at short time intervals; it makes it possible to avoid collisions with obstacles that suddenly move into the robot's path.
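The delay-to-granularity mapping described above (granularity growing with delay, e.g. five times the delay) can be sketched as follows. The factor of five follows the example in the text; the step-generation helper and its names are illustrative assumptions.

```python
def plan_granularity(delay_s, factor=5.0):
    """Time-step length (granularity) grows with the measured delay.

    The factor of 5 follows the example in the text; it is a tunable
    parameter, not a fixed part of the method.
    """
    return factor * delay_s

def step_times(total_duration_s, granularity_s):
    """Timestamps t1, t2, ... of the planned states over the motion."""
    n = max(1, round(total_duration_s / granularity_s))
    return [i * granularity_s for i in range(1, n + 1)]

granularity = plan_granularity(0.02)   # 20 ms delay -> 100 ms steps
steps = step_times(1.0, granularity)   # waypoint times over a 1 s motion
print(len(steps))  # 10
```

A larger measured delay yields a larger granularity, and therefore fewer, coarser waypoints over the same motion, matching the wider circle spacing in part (b) of FIG. 4.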
FIG. 5 is a diagram showing an example of a sequence TBL1 of an operation plan generated by the planning unit 104 according to an embodiment of the present disclosure. As shown in FIG. 5, the sequence TBL1 is a sequence indicating each state of the robot 203, described later, for each of n time steps from the source of the object M to its destination.
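One plausible in-memory shape for such a sequence is a list of per-time-step state records. The field names below are illustrative assumptions; FIG. 5 itself is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class PlanStep:
    """One row of the operation-plan sequence (illustrative fields)."""
    t: float           # time of this step
    object_type: str   # kind (and implicitly shape) of object M
    position: tuple    # robot position (x, y, z)
    posture: tuple     # robot orientation, e.g. Euler angles
    grip_force: float  # how firmly object M is gripped

# A toy two-step sequence from a source state toward a destination:
TBL1 = [
    PlanStep(0.1, "part A", (0.0, 0.0, 0.1), (0.0, 0.0, 0.0), 5.0),
    PlanStep(0.2, "part A", (0.0, 0.1, 0.2), (0.0, 0.0, 0.1), 5.0),
]
print(TBL1[0].object_type)  # part A
```

Each record corresponds to one time step of the plan; the control unit would consume these records in order when generating control signals.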
The control unit 105 generates control signals for controlling the control target in the controlled device 20 (the robot 203 in this example). For example, when moving the object M to its destination, the control unit 105 generates control signals that move the object M to the position where the object M is recognized (hereinafter, the "recognition position") and then from the recognition position to the destination.
Specifically, the control unit 105 generates control signals for controlling the robot 203 based on the sequence output by the planning unit 104 and information on each joint angle of the robot 203 transmitted from the controller 202 described later. The control signals include signals that control each joint angle of the robot 203. Since the sequence includes the control amount, the control unit 105 generates control signals that reflect the control amount. In other words, the control unit 105 generates control signals that adjust the joint angles, at time intervals shorter than the time step, so that the robot approaches the target posture from the current joint angles.
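Adjusting the joint angles toward the target posture at intervals shorter than the time step can be sketched as a linear interpolation. This is a simplified illustration: a real controller would also respect joint velocity and acceleration limits, which the sketch ignores.

```python
def interpolate_joints(current, target, n_sub):
    """Sub-time-step joint targets: move from the current joint angles
    toward the target posture in n_sub equal increments (linear sketch)."""
    return [
        [c + (t - c) * k / n_sub for c, t in zip(current, target)]
        for k in range(1, n_sub + 1)
    ]

# One time step split into 4 control intervals for a 2-joint arm:
cmds = interpolate_joints([0.0, 0.0], [0.4, -0.2], 4)
print(cmds[0])   # [0.1, -0.05]
print(cmds[-1])  # [0.4, -0.2]
```

Each inner list is one control command; the last command reaches the target posture for that time step, so the coarse plan and the fine control stay consistent.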
The control unit 105 may generate control signals that optimize an evaluation function. Examples of the evaluation function include a function representing the amount of energy the robot 203 consumes when moving the object M and a function representing the distance along the path over which the object M is moved. The control unit 105 transmits the generated control signals to the controlled device 20.
FIG. 6 is a diagram showing an example of the control signal Cnt generated by the control unit 105 according to an embodiment of the present disclosure. As shown in FIG. 6, the control signal Cnt of the initial plan generated by the control unit 105 consists of, for example, a control signal and a control amount for each time step from the source of the object M to its destination.
The storage unit 106 stores various information necessary for the processing performed by the control device 10. For example, the storage unit 106 stores the file indicating the constraint conditions, the sequence TBL1, and the control signal Cnt and control amount for each time step.
As shown in FIG. 1, the controlled device 20 includes an imaging device 201, one or more controllers 202 (corresponding to the control units 105), and robots 203 controlled by the controllers 202. A controller 202 may control multiple robots 203. The controlled device 20 is a device that includes the robot 203 to be controlled by the control device 10.
The imaging device 201 captures the environment around the robot 203, including the state of the object M. The imaging device 201 is, for example, an industrial camera capable of identifying the state (i.e., position and posture) of the object M. Images captured by the imaging device 201 are transmitted to the control device 10 together with time information such as a timestamp.
The controller 202 controls the robot 203 in response to control signals transmitted from the control device 10. The controller 202 also acquires information on each joint angle of the robot 203 that has operated under that control, and transmits the acquired joint-angle information to the control device 10 together with time information such as a timestamp.
The robot 203 operates under the control of the corresponding controller 202. For example, if the control signal transmitted from the control device 10 to the controlled device 20 is a control signal for moving the object M from state A to state B, the robot 203, under the control of the controller 202, grips the object M and moves it from state A to state B.
 Because the controlled device 20 attaches time information such as a timestamp to the information it transmits, the control device 10 can determine when the transmitted information was obtained, which keeps the accuracy of the control signals it generates above a certain level.
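The timestamp handling described above can be sketched as follows. The message structure and field names are illustrative assumptions and not part of the disclosed configuration; the point is only that each state report carries its capture time so the control device can judge how stale the data is.

```python
import time
from dataclasses import dataclass, field


@dataclass
class StateReport:
    """State message sent from the controlled device 20 to the control device 10.

    The class and field names are hypothetical; the embodiment only requires
    that state information travel together with time information.
    """
    joint_angles: list            # joint angles reported by the controller 202
    timestamp: float = field(default_factory=time.time)  # capture time [s]


def age_of(report, now=None):
    """Return how old the reported state is at the moment it is used."""
    if now is None:
        now = time.time()
    return now - report.timestamp


report = StateReport(joint_angles=[0.0, 0.5, 1.2], timestamp=100.0)
print(age_of(report, now=100.25))  # 0.25
```

A report whose age exceeds some tolerance could then be discarded or down-weighted when generating the next control signal.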
(Processing performed by the control system)
 FIG. 7 shows an example of the processing flow of the control system 1 according to an embodiment of the present disclosure. The processing performed by the control device 10 in the control system 1 is described below with reference to FIG. 7. It is assumed that the control device 10 has received, from the controlled device 20, information indicating the environment captured by the imaging device 201.
 The input unit 101 inputs the work goal and constraint conditions to the planning unit 104 (step S1). The recognition unit 102 acquires, from the controlled device 20, an image showing the environment captured by the imaging device 201, and recognizes the environment shown in that image (step S2). The recognition unit 102 outputs information indicating the recognized environment to the planning unit 104. The measurement unit 103 measures the communication delay time in the communication network NW (step S3) and outputs the measured delay time to the planning unit 104.
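The delay measurement in step S3 can be sketched as an echo probe whose round-trip time is halved to estimate the one-way delay. The probe interface (`send_probe`, `wait_echo`) is a hypothetical stand-in for the real network interface; the disclosure does not prescribe a particular measurement method.

```python
import time


def measure_delay(send_probe, wait_echo):
    """Estimate one-way delay as half the round-trip time of an echo probe.

    send_probe and wait_echo are hypothetical callables standing in for the
    network interface of the control device 10.
    """
    t0 = time.perf_counter()
    send_probe(b"probe")
    wait_echo()                      # blocks until the echo returns
    t1 = time.perf_counter()
    return (t1 - t0) / 2.0


# Demonstration with a stand-in "network" that echoes after a short pause.
delay = measure_delay(lambda payload: None, lambda: time.sleep(0.02))
print(delay)
```

In practice the measurement would be repeated and smoothed (e.g. a moving average) before being handed to the planning unit 104.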
 The planning unit 104 generates an operation plan indicating the flow of the operation of the robot 203, based on the delay time measured by the measurement unit 103, the work goal and constraint conditions input by the input unit 101, and the information indicating the environment recognized by the recognition unit 102. For example, the planning unit 104 generates an operation plan whose granularity depends on the delay time in the communication network NW (step S4). Specifically, the larger the delay time, the coarser the granularity of the generated operation plan. More specifically, the planning unit 104 may set the granularity of the operation plan to, for example, five times the delay time. The planning unit 104 outputs the generated sequence to the control unit 105.
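The granularity decision in step S4 can be sketched as follows. The factor of five follows the example in the text (granularity equal to five times the delay time); the clamping bounds `min_g` and `max_g` are illustrative assumptions added so that a zero or extreme delay still yields a usable plan step.

```python
def plan_granularity(delay_s, factor=5.0, min_g=0.1, max_g=5.0):
    """Time span [s] covered by one step of the operation plan.

    The 5x factor follows the example in the text; the clamping bounds
    are illustrative assumptions, not part of the disclosure.
    """
    return min(max_g, max(min_g, factor * delay_s))


print(plan_granularity(0.2))    # 1.0  (0.2 s delay -> 1.0 s plan steps)
print(plan_granularity(0.001))  # 0.1  (clamped to the lower bound)
```

A larger granularity means each planned step spans more time, so fewer round trips over the delayed network are needed per unit of motion.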
 Based on the operation plan, the control unit 105 generates a control signal for controlling the robot 203, the control target in the controlled device 20 (step S5). For example, when moving the object M to a destination, the control unit 105 generates control signals that first move the object M to a recognition position where the object M is recognized, and then move it from the recognition position to the destination.
 Specifically, based on the sequence output by the planning unit 104 and the joint-angle information transmitted from the controller 202, the control unit 105 generates control signals that control the robot 203 at time intervals shorter than the time step. The control unit 105 transmits the generated control signals to the controlled device 20 (step S6).
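Controlling at intervals shorter than the plan's time step can be sketched as expanding the coarse plan into finer setpoints. Linear interpolation and the list-of-waypoints representation are illustrative assumptions; the embodiment only requires that control commands be issued more finely than the plan's granularity.

```python
def interpolate_plan(waypoints, timestep, control_dt):
    """Expand a coarse plan (one waypoint per time step) into finer setpoints.

    waypoints  : list of joint-angle vectors, one per plan time step
    timestep   : duration of one plan step [s]
    control_dt : control interval [s], shorter than timestep
    """
    n = int(round(timestep / control_dt))   # control updates per plan step
    setpoints = []
    for a, b in zip(waypoints, waypoints[1:]):
        for i in range(n):
            t = i / n
            setpoints.append([x + t * (y - x) for x, y in zip(a, b)])
    setpoints.append(list(waypoints[-1]))
    return setpoints


# Two plan waypoints, control at half the plan time step.
print(interpolate_plan([[0.0], [1.0]], timestep=1.0, control_dt=0.5))
# [[0.0], [0.5], [1.0]]
```

The coarse plan absorbs the network delay, while the fine setpoints keep the robot's motion smooth between plan updates.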
(Advantages)
 The control system 1 according to an embodiment of the present disclosure has been described above. In the control system 1, the control device 10 includes a measurement unit 103 (an example of a monitoring means for monitoring a state) that measures the communication delay time in the communication network NW over which control signals for the robot 203 (an example of a control target), generated based on an operation plan of the robot 203, are transmitted, and a planning unit 104 (an example of a determination means) that determines the control amount of the robot 203 and the granularity of the operation plan based on the delay time.
 In this way, the control system 1 can appropriately reflect communication delays in control that determines operations by taking the operation plan into account.
 In the embodiment described above, the measurement unit 103 measures the communication delay time in the communication network NW. Alternatively, the measurement unit 103 may measure the amount of communication data in the communication network NW, identify the delay time corresponding to the measured data amount, and use the identified delay time in place of the measured delay time.
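One way to realize this variation is a calibration table mapping measured traffic volume to an expected delay. The table values below are invented for illustration; in practice they would be calibrated against the actual network.

```python
import bisect

# Illustrative calibration table: (traffic volume in Mbit/s, delay in seconds).
# These numbers are assumptions for the sketch, not disclosed values.
_VOLUME_TO_DELAY = [(0.0, 0.01), (10.0, 0.05), (50.0, 0.2), (100.0, 0.8)]


def delay_from_volume(mbps):
    """Look up the delay corresponding to a measured traffic volume."""
    volumes = [v for v, _ in _VOLUME_TO_DELAY]
    i = max(bisect.bisect_right(volumes, mbps) - 1, 0)
    return _VOLUME_TO_DELAY[i][1]


print(delay_from_volume(60.0))  # 0.2
```

The looked-up delay can then feed the same granularity decision as a directly measured delay.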
 Next, a minimum configuration of the control device 10 according to an embodiment of the present disclosure is described. FIG. 8 shows an example of the control device 10 in its minimum configuration. As shown in FIG. 8, the minimum configuration comprises a measurement unit 103 (an example of a monitoring means) and a planning unit 104 (an example of a determination means). The measurement unit 103 measures the communication delay time (an example of monitoring a state) in the communication network NW over which control signals for the robot 203 (an example of a control target), generated based on the operation plan of the robot 203, are transmitted. The measurement unit 103 can be realized, for example, using the functions of the measurement unit 103 illustrated in FIG. 1. The planning unit 104 determines the control amount of the robot 203 and the granularity of the operation plan based on the delay time. The planning unit 104 can be realized, for example, using the functions of the planning unit 104 illustrated in FIG. 1.
 Next, the processing of the minimum-configuration control device 10 is described with reference to FIG. 9, which shows an example of its processing flow.
 The measurement unit 103 measures the communication delay time in the communication network NW over which the control signal for the robot 203, generated based on the operation plan of the robot 203, is transmitted (step S101). The planning unit 104 determines the control amount of the robot 203 and the granularity of the operation plan based on the delay time (step S102).
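The two-step flow of the minimum configuration (steps S101 and S102) can be sketched as a monitor/decide pipeline. The class names, the delay probe, and the rule relating the control amount to the granularity are illustrative assumptions; only the 5x-delay granularity example comes from the text.

```python
class Monitor:
    """Minimal stand-in for the measurement unit 103: reports network delay."""

    def __init__(self, probe):
        self.probe = probe          # callable returning the current delay [s]

    def measure(self):
        return self.probe()


class Planner:
    """Minimal stand-in for the planning unit 104."""

    GRANULARITY_FACTOR = 5.0        # from the 5x-delay example in the text

    def decide(self, delay):
        granularity = self.GRANULARITY_FACTOR * delay
        # Illustrative rule: control at one tenth of the plan granularity,
        # so the control amount stays smaller than the plan step.
        control_amount = granularity / 10.0
        return control_amount, granularity


monitor = Monitor(probe=lambda: 0.2)    # pretend the measured delay is 0.2 s
planner = Planner()
control_amount, granularity = planner.decide(monitor.measure())
print(granularity, control_amount)
```

This mirrors the claimed structure: the monitor observes the network state, and the planner derives both the control amount and the plan granularity from that state.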
 The minimum configuration of the control device 10 according to an embodiment of the present disclosure has been described above. This control device 10 can appropriately reflect communication delays in control that determines operations by taking the operation plan into account.
 The order of the processes in each embodiment of the present disclosure may be changed as long as appropriate processing is still performed.
 Although embodiments of the present disclosure have been described, the control system 1, the control device 10, the controlled device 20, and the other control devices described above may each incorporate a computer device. The process steps described above are stored in the form of a program on a computer-readable recording medium, and the processes are performed by a computer reading and executing this program. A specific example of such a computer is described below.
 FIG. 10 is a schematic block diagram showing the configuration of a computer according to at least one embodiment. As shown in FIG. 10, the computer 5 includes a CPU (Central Processing Unit) 6, a main memory 7, a storage 8, and an interface 9. For example, each of the control system 1, the control device 10, the controlled device 20, and the other control devices described above is implemented on the computer 5. The operation of each processing unit described above is stored in the storage 8 in the form of a program. The CPU 6 reads the program from the storage 8, loads it into the main memory 7, and executes the processing described above in accordance with the program. The CPU 6 also reserves, in the main memory 7, storage areas corresponding to each of the storage units described above, in accordance with the program.
 Examples of the storage 8 include an HDD (Hard Disk Drive), an SSD (Solid State Drive), a magnetic disk, a magneto-optical disk, a CD-ROM (Compact Disc Read Only Memory), a DVD-ROM (Digital Versatile Disc Read Only Memory), and semiconductor memory. The storage 8 may be an internal medium directly connected to the bus of the computer 5, or an external medium connected to the computer 5 via the interface 9 or a communication line. When the program is distributed to the computer 5 over a communication line, the receiving computer 5 may load the program into the main memory 7 and execute the processing described above. In at least one embodiment, the storage 8 is a non-transitory tangible storage medium.
 The program may realize only some of the functions described above. Furthermore, the program may be a so-called differential file (differential program), that is, a file that realizes the functions described above in combination with a program already recorded in the computer device.
 Although several embodiments of the present disclosure have been described, these embodiments are merely examples and do not limit the scope of the disclosure. Various additions, omissions, substitutions, and modifications may be made to these embodiments without departing from the gist of the disclosure.
 Some or all of the above embodiments can also be described as in the following appendices, but are not limited to the following.
(Appendix 1)
 A control device comprising:
 a monitoring means for monitoring a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan of the control target; and
 a determination means for determining a control amount of the control target and a granularity of the operation plan based on the state.
(Appendix 2)
 The control device according to Appendix 1, further comprising:
 a generation means for generating the control signal based on the control amount and the granularity of the operation plan determined by the determination means.
(Appendix 3)
 The control device according to Appendix 1 or 2, wherein the control amount is smaller than the granularity of the operation plan.
(Appendix 4)
 The control device according to any one of Appendices 1 to 3, wherein the control target is a robot.
(Appendix 5)
 A control system comprising:
 the control device according to any one of Appendices 1 to 4; and
 a control target of the control device.
(Appendix 6)
 A control method comprising:
 monitoring a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan of the control target; and
 determining a control amount of the control target and a granularity of the operation plan based on the state.
(Appendix 7)
 A recording medium storing a program that causes a computer to execute:
 monitoring a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan of the control target; and
 determining a control amount of the control target and a granularity of the operation plan based on the state.
 According to each aspect of the present disclosure, communication delays can be appropriately reflected in control that determines operations by taking an operation plan into account.
1 ... Control system
5 ... Computer
6 ... CPU
7 ... Main memory
8 ... Storage
9 ... Interface
10 ... Control device
20 ... Controlled device
101 ... Input unit
102 ... Recognition unit
103 ... Measurement unit
104 ... Planning unit
105 ... Control unit
201 ... Imaging device
202 ... Controller
203 ... Robot

Claims (7)

  1.  A control device comprising:
      a monitoring means for monitoring a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan of the control target; and
      a determination means for determining a control amount of the control target and a granularity of the operation plan based on the state.
  2.  The control device according to claim 1, further comprising:
      a generation means for generating the control signal based on the control amount and the granularity of the operation plan determined by the determination means.
  3.  The control device according to claim 1 or claim 2, wherein the control amount is smaller than the granularity of the operation plan.
  4.  The control device according to any one of claims 1 to 3, wherein the control target is a robot.
  5.  A control system comprising:
      the control device according to any one of claims 1 to 4; and
      a control target of the control device.
  6.  A control method comprising:
      monitoring a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan of the control target; and
      determining a control amount of the control target and a granularity of the operation plan based on the state.
  7.  A recording medium storing a program that causes a computer to execute:
      monitoring a state of a communication network that transmits a control signal for controlling a control target, the control signal being generated based on an operation plan of the control target; and
      determining a control amount of the control target and a granularity of the operation plan based on the state.
PCT/JP2022/047093 2022-12-21 2022-12-21 Control device, control system, control method, and recording medium WO2024134802A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/047093 WO2024134802A1 (en) 2022-12-21 2022-12-21 Control device, control system, control method, and recording medium


Publications (1)

Publication Number Publication Date
WO2024134802A1 true WO2024134802A1 (en) 2024-06-27

Family

ID=91588179


Country Status (1)

Country Link
WO (1) WO2024134802A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160229050A1 (en) * 2015-02-06 2016-08-11 Abb Technology Ag Contact force limiting with haptic feedback for a tele-operated robot
JP2019003510A (en) * 2017-06-16 2019-01-10 株式会社 日立産業制御ソリューションズ Robot controller, robot control system, and robot control method
JP2019217557A (en) * 2018-06-15 2019-12-26 株式会社東芝 Remote control method and remote control system
JP2020170467A (en) * 2019-04-05 2020-10-15 株式会社Preferred Networks Information processing system, robot, remote control device, information processing method, and program
WO2021024352A1 (en) * 2019-08-05 2021-02-11 三菱電機株式会社 Control device and control method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22969189

Country of ref document: EP

Kind code of ref document: A1