WO2021029151A1 - Robot control device, method, and program - Google Patents

Robot control device, method, and program

Info

Publication number
WO2021029151A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
target person
control device
action
behavior
Prior art date
Application number
PCT/JP2020/025742
Other languages
French (fr)
Japanese (ja)
Inventor
Ryuichi Suzuki
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2021029151A1 publication Critical patent/WO2021029151A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/06Safety devices
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions

Definitions

  • This disclosure relates to a robot control device, method, and program.
  • The present application has been made in view of the above, and its object is to provide a robot control device, method, and program that allow a target person to easily grasp the operation of a robot.
  • In order to solve the above problems, a robot control device according to one aspect of the present disclosure includes a detection unit and a determination unit.
  • The detection unit is provided around a robot and detects the behavior of a target person in a grace area farther from the robot than an essential area in which an evacuation action of the robot is essential. Based on the behavior detected by the detection unit, the determination unit determines a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
  • According to one aspect of the embodiment, the target person can easily grasp the operation of the robot.
  • FIG. 1A is a diagram (1) showing an overview of the robot control device according to the embodiment.
  • FIG. 1B is a diagram (2) showing an overview of the robot control device according to the embodiment.
  • FIG. 2 is a block diagram showing a configuration example of the robot control device according to the embodiment.
  • FIG. 5A is a diagram (1) showing an example of an action with respect to a target person.
  • FIG. 5B is a diagram (2) showing an example of an action with respect to a target person.
  • FIG. 6 is a flowchart showing the processing procedure executed by the robot control device according to the embodiment.
  • FIG. 7 is a hardware configuration diagram showing an example of a computer that realizes the functions of the robot control device.
  • This technical idea was conceived with the above points in mind: by determining the robot's action in response to the behavior the target person takes toward the robot, the target person can easily grasp the robot's movement.
  • FIGS. 1A and 1B are diagrams showing an outline of the robot control device according to the embodiment.
  • As shown in FIG. 1A, the robot control device 10 is a control device built into the robot 1 that controls the robot 1.
  • The robot control device 10 may instead be provided outside the robot 1 and control the robot 1 remotely.
  • For example, the robot 1 is a mobile robot; the example shown in FIG. 1 depicts a robot traveling on wheels.
  • The robot may instead be a legged or flying mobile body. It may also have at least one arm, or it may be a mobile body without arms.
  • The robot control device 10 detects the behavior of a target person T around the robot 1 based on the results of environment sensing, such as by the sensor S, and determines the robot 1's action with respect to the target person T.
  • Specifically, as shown in FIG. 1B, in the control method according to the embodiment, an essential area A1 and a grace area A2 farther from the robot 1 than the essential area A1 are provided around the robot 1.
  • In the example shown in FIG. 1B, the essential area A1 and the grace area A2 are each circular areas centered on the robot 1.
  • The essential area A1 is, for example, an area where the target person T is likely to come into contact with the robot 1.
  • When the target person T enters the essential area A1, the robot 1 performs an evacuation action to move away from the target person T.
  • Here, the evacuation action is an action that moves the robot 1 away from the target person T.
  • The grace area A2 is an area where there is a certain amount of time to spare before the target person T and the robot 1 would collide.
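The geometry of these two areas lends itself to a simple membership test. The following is a minimal sketch in Python; the class name, the radii, and the string labels are illustrative assumptions, since the patent fixes no concrete values.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Areas:
    """Circular essential (A1) and grace (A2) areas centered on the robot.
    The radii are illustrative; the patent fixes no concrete values."""
    essential_radius: float  # A1: the evacuation action is mandatory here
    grace_radius: float      # A2: outer ring where pre-actions apply

    def classify(self, robot_xy, person_xy) -> str:
        d = hypot(person_xy[0] - robot_xy[0], person_xy[1] - robot_xy[1])
        if d <= self.essential_radius:
            return "essential"  # trigger the evacuation action
        if d <= self.grace_radius:
            return "grace"      # watch behavior, consider a pre-action
        return "outside"

areas = Areas(essential_radius=1.5, grace_radius=5.0)
print(areas.classify((0.0, 0.0), (3.0, 0.0)))  # -> "grace"
```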
  • The robot control device 10 determines a pre-action according to the above evacuation action of the robot 1 based on the behavior of the target person T in the grace area A2.
  • Here, the pre-action is an action that the robot 1 performs passively in response to the behavior of the target person T, and it serves to notify the target person T that the robot 1 has recognized the target person T.
  • Therefore, in the control method according to the embodiment, when the target person T behaves in the grace area A2 in a way that indicates the robot 1 has been recognized, the robot 1 performs a pre-action to show the target person T that it, in turn, has recognized the target person T.
  • For example, when the robot control device 10 detects behavior that avoids the robot 1 (for example, a change of course) in the grace area A2, it determines an action responding to that behavior as the pre-action and causes the robot 1 to execute it.
  • Here, when the target person T behaves so as to avoid the robot 1, it means the target person T has recognized the robot 1. That is, the target person T recognized the robot 1 and then gave way to it.
  • Then, in response to this behavior, the robot control device 10 shows, as the robot 1's pre-action, that the robot 1 recognizes the target person T, letting the target person T intuitively grasp that the robot 1 is aware of them.
  • As an example of the pre-action here, the traveling direction of the robot 1 is changed to the side opposite the course taken by the target person T. That is, the target person T and the robot 1 give way to each other.
  • In other words, by deciding on an action as if responding to the target person T's behavior, the robot control device 10 can make the target person T aware, with few actions, that the robot 1 has noticed the target person T.
  • Like the evacuation action, the pre-action moves the robot 1 away from the target person T, but it carries a different meaning from the evacuation action.
  • Once the target person T knows that the robot 1 has recognized them, they no longer need to worry about the robot 1's subsequent actions, which reduces the burden on the target person T.
  • In this way, the robot control device 10 sets the grace area A2 in addition to the essential area A1 and determines the robot 1's pre-action toward the target person T according to the target person T's behavior in the grace area A2. That is, the control method according to the embodiment applies the natural communication by which humans give way to each other to the communication between the target person T in the grace area A2 and the robot 1.
  • As a result, the robot control device 10 can let the target person T know in advance, in the grace area A2 farther away than the essential area A1, that the robot 1 has recognized the target person T.
  • Therefore, according to the robot control device 10 of the embodiment, the target person T can easily grasp the operation of the robot 1.
  • FIG. 2 is a block diagram showing a configuration example of the robot control device 10 according to the embodiment.
  • As shown in FIG. 2, the robot control device 10 includes a remote control reception unit 2, an input unit 3, an output unit 4, a drive unit 5, a storage unit 6, and a control unit 7.
  • The remote control reception unit 2 is a communication unit that receives remote operations for the robot 1.
  • The input unit 3 inputs the results of environment sensing around the robot 1 to the control unit 7.
  • In the example shown in FIG. 2, the input unit 3 includes a laser ranging device 31, an RGB camera 32, a stereo camera 33, and an inertial measurement unit 34.
  • The laser ranging device 31 is a device that measures the distance to an obstacle and is composed of an infrared rangefinder, an ultrasonic rangefinder, LiDAR (Laser Imaging Detection and Ranging), or the like.
  • The RGB camera 32 is an imaging device that captures images (still or moving).
  • The stereo camera 33 is an imaging device that measures the distance to an object by imaging the object from a plurality of directions.
  • The inertial measurement unit 34 is, for example, a device that detects three-axis angles and acceleration.
  • The output unit 4 is provided on the robot 1, for example, and is composed of a display device and a speaker.
  • The output unit 4 outputs images and sounds input from the control unit 7.
  • The drive unit 5 is composed of actuators and drives the robot 1 under the control of the control unit 7.
  • The storage unit 6 stores target person information 61, model information 62, parameter information 63, behavior information 64, and action information 65.
  • The target person information 61 is information about the target person T.
  • In the present embodiment, the target person information 61 is information on the number of times and the frequency with which the target person T has come into contact with the robot 1.
  • FIG. 3 is a diagram showing an example of the target person information 61 according to the embodiment.
  • As shown in FIG. 3, the target person information 61 associates a "target person ID", a "feature amount", a "contact history", a "recognition level", and the like with one another.
  • The "target person ID" is an identifier that identifies the target person T.
  • The "feature amount" indicates the feature amount of the corresponding target person T.
  • For example, the feature amount is information on the feature amount of the target person T's face.
  • The "contact history" is information on the history of the corresponding target person T's contact with the robot 1.
  • In other words, the contact history here is the history of the robot 1 recognizing the target person T.
  • For example, information on the dates, times, and frequency with which the robot 1 recognized the target person T is registered in the contact history.
  • The "recognition level" indicates the degree to which the corresponding target person T is aware of the robot 1; it is set according to the number of times or the frequency of contact with the robot, based on the contact history.
  • In the present embodiment, the recognition level is expressed in three grades, where "A" indicates the highest level and "C" the lowest.
  • For example, recognition level "A" indicates that the target person T is in constant contact with the robot 1, and a target person T with recognition level "C" is in contact with the robot for the first time. That is, the recognition level is updated upward as the number of times the target person T comes into contact with the robot 1 increases.
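As one way to realize this update rule, a contact count can be mapped to the three grades. The thresholds below are hypothetical; the text only fixes the endpoints ("C" for a first contact, "A" for constant contact).

```python
def recognition_level(contact_count: int) -> str:
    """Map the contact history to the three-grade recognition level.
    The thresholds (1 and 10) are assumptions; the patent only states
    that "C" means a first contact and "A" means constant contact."""
    if contact_count <= 1:
        return "C"  # meeting the robot for the first time
    if contact_count < 10:
        return "B"
    return "A"      # in constant, everyday contact with the robot
```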
  • The model information 62 is information about a model that identifies physical characteristics of the target person T from image data.
  • For example, the model information 62 is information about a model for estimating the age of the target person T.
  • The parameter information 63 is information on various parameters related to the grace area A2. As will be described later, the robot control device 10 can set the grace area A2 according to the target person T, and the parameters for setting the grace area A2 are stored in the storage unit 6 as the parameter information 63. A specific example of the parameter information 63 will be described later.
  • The behavior information 64 is information on the behavior of the target person T; in the present embodiment, it is information on behavior indicating that the target person T has recognized the robot 1.
  • FIG. 4 is a diagram showing an example of the behavior information 64 according to the embodiment.
  • As shown in FIG. 4, the behavior information 64 associates a "detection ID" with a "detection condition".
  • The "detection ID" is an identifier that identifies each detection condition.
  • The "detection condition" is a condition for detecting behavior that the target person T performs after recognizing the robot 1.
  • In the example shown in FIG. 4, the course change given in the detection condition of detection ID "D001" means that the target person T changed course and opened a passage for the robot 1. The speed decrease given in the detection condition of detection ID "D002" means that the target person T's moving speed decreased, and acceleration after changing the traveling direction means that the target person T accelerated after yielding the passage to the robot 1.
  • The quickening or slowing of pace given in the detection condition of detection ID "D003" means that the target person T's pace changed before and after recognizing the robot 1.
  • Sending a line of sight to the robot 1, given in the detection condition of detection ID "D004", means that the target person T looked at the robot 1; as also covered by detection ID "D004", sending the line of sight in a direction other than the robot 1 for a certain period of time or longer may likewise be used as a detection condition.
  • For example, when the target person T keeps looking in a direction other than the robot 1 for a certain period of time or longer, it is assumed that the target person T is searching for a route to avoid a collision with the robot 1, or that the target person T is reluctant to communicate with the robot 1 (for example, is afraid of the robot 1). That is, such behavior can be regarded as behavior the target person T performs after recognizing the robot 1.
  • The change in the orientation of a part of the body, given in the detection condition of detection ID "D004", refers to behavior such as the orientation of the torso, arms, or lower legs changing, or the upper body leaning back, even though the target person has not changed course.
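A matcher over the target person's tracked state could evaluate these conditions roughly as follows. This is a minimal sketch: the `PersonState` fields and every numeric threshold (15 degrees, 20 % speed change, 1.0 s of gaze) are assumptions, not values given in the patent.

```python
from dataclasses import dataclass

@dataclass
class PersonState:
    """Tracked state of the target person at one time step."""
    heading_deg: float      # direction of travel
    speed: float            # moving speed in m/s
    gaze_on_robot_s: float  # time spent looking at the robot
    gaze_away_s: float      # time spent looking away from the robot

def matched_conditions(prev: PersonState, cur: PersonState) -> list:
    """Illustrative checks for detection IDs D001-D004. All thresholds
    are assumptions, not values given in the patent."""
    hits = []
    if abs(cur.heading_deg - prev.heading_deg) > 15.0:
        hits.append("D001")  # changed course, opening a passage
    if cur.speed < 0.8 * prev.speed:
        hits.append("D002")  # slowed down (may accelerate after yielding)
    elif cur.speed > 1.2 * prev.speed:
        hits.append("D003")  # pace changed after noticing the robot
    if cur.gaze_on_robot_s >= 1.0 or cur.gaze_away_s >= 1.0:
        hits.append("D004")  # sustained gaze toward or away from the robot
    return hits
```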
  • The action information 65 is information on the actions that the robot 1 executes in response to the target person T's behavior and on the timing of performing those actions.
  • The control unit 7 has a function of controlling each component of the robot control device 10. As shown in FIG. 2, the control unit 7 includes an identification unit 71, a setting unit 72, a detection unit 73, and a determination unit 74.
  • The identification unit 71 identifies the target person from image data captured by, for example, the RGB camera 32. Specifically, the identification unit 71 extracts the feature amount of the target person T's face from the image data and identifies the target person T by comparing the extracted feature amount with the feature amounts registered in the target person information 61.
  • The identification unit 71 can also estimate the age of the target person T shown in the image data based on the model in the model information 62.
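The identification unit's comparison step could look like the following sketch, which matches an extracted face feature against the registry from the target person information (cf. FIG. 3). Cosine similarity and the 0.6 threshold are assumptions; the patent only says the feature amounts are compared.

```python
import numpy as np

def identify(face_feature: np.ndarray, registry: dict,
             threshold: float = 0.6):
    """Match an extracted face feature against the feature amounts
    registered for each target person ID. Cosine similarity and the
    0.6 threshold are assumptions."""
    best_id, best_sim = None, threshold
    for person_id, ref in registry.items():
        sim = float(np.dot(face_feature, ref)
                    / (np.linalg.norm(face_feature) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id  # None means an unknown person (first contact, level "C")
```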
  • The setting unit 72 sets the grace area A2 for the target person T based on the identification result of the identification unit 71.
  • The setting unit 72 sets the grace area A2 based on the target person T's recognition level, the target person T's age, the width of the passage where the robot 1 and the target person T pass each other, and the like.
  • As for the information on the passage width, the robot control device 10 may hold a map of passage widths in advance, or may calculate the passage width based on the detection results of the input unit 3.
  • The setting unit 72 extracts the parameters corresponding to the identification result of the identification unit 71 and the passage width from the parameter information 63, and sets the grace area A2 based on the extracted parameters. Specifically, the setting unit 72 narrows the grace area A2 as the recognition level is higher and widens it as the recognition level is lower.
  • In other words, the grace area A2 is narrowed for a target person T who is in everyday contact with the robot 1, that is, a target person T who fully understands how the robot 1 operates. As a result, the robot 1 can refrain from acting on that target person T, which suppresses any decrease in the robot 1's work efficiency.
  • By contrast, how a target person T who meets the robot 1 for the first time will behave toward it differs from person to person, and it is difficult to predict when and what kind of behavior will be taken. For such a target person T, widening the grace area A2 makes it possible to reliably detect the behavior taken after recognizing the robot 1.
  • For example, taking the grace area A2 for a target person T with recognition level "A" as the base, the setting unit 72 expands it by +3 m when the recognition level is "B" and by +5 m when it is "C"; that is, the lower the recognition level, the more the grace area A2 is expanded.
  • Here, +3 m and +5 m indicate the radius by which the grace area A2 is extended.
  • Hereinafter, the expanded portion of the area will be referred to as the expansion area.
  • The setting unit 72 can adjust the expansion area by multiplying the expansion area set based on the above recognition level by variables corresponding to the age, the passage width, and the like. For example, if the target person T is a child, some behavior may be taken as soon as the robot 1 is recognized, while if the target person T is an elderly person, their field of view is assumed to be narrower than an adult's, so the robot 1 is assumed to be recognized at a closer distance than by an adult.
  • Therefore, when the target person T is a child, the expansion area is set wider than for an adult, and when the target person T is an elderly person, the expansion area is set narrower.
  • Further, the setting unit 72 sets a variable of "1.0" when the passage width is 1 m to 3 m, "1.3" when the passage width is less than 1 m, and "0.7" when the passage width is 3 m or more, and sets the grace area A2 by multiplying the expansion area by this variable.
  • That is, the wider the passage, the narrower the expansion area, and the narrower the passage, the wider the expansion area. This is because the narrower the passage, the more limited the target person T's field of view, so the target person T can recognize the robot 1 from farther away than when the passage is wide.
  • In addition, when the target person T's moving speed is faster than walking, such as when the target person T is operating a vehicle such as a car or a motorcycle, or when the target person T is running, the setting unit 72 may set the grace area A2 in consideration of the moving speed.
  • For example, by multiplying the above expansion area by the value obtained by dividing the average of the target person T's current moving speed (including any vehicle) by the general average speed of a pedestrian, a grace area A2 corresponding to the moving speed can be set. That is, the faster the moving speed, the wider the grace area A2 is set.
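Putting these parameters together, a grace-area radius could be computed as in the sketch below. The +3 m / +5 m expansions and the 1.3 / 1.0 / 0.7 passage-width factors come from the text; the base radius, the concrete age factors, and the 1.4 m/s pedestrian speed are assumptions.

```python
def grace_radius(base_radius: float, level: str, passage_width: float,
                 age: int, speed: float,
                 pedestrian_speed: float = 1.4) -> float:
    """Grace-area radius from the parameters described above. The
    expansion per recognition level and the passage-width factors follow
    the text; base_radius, the age factors, and the 1.4 m/s average
    pedestrian speed are assumptions."""
    expansion = {"A": 0.0, "B": 3.0, "C": 5.0}[level]  # +3 m / +5 m
    if passage_width < 1.0:
        width_factor = 1.3  # narrow passage: robot noticed from farther away
    elif passage_width < 3.0:
        width_factor = 1.0
    else:
        width_factor = 0.7  # wide passage: narrower expansion suffices
    if age < 13:
        age_factor = 1.2    # child: may react as soon as the robot is seen
    elif age >= 65:
        age_factor = 0.8    # elderly: assumed to recognize the robot closer
    else:
        age_factor = 1.0
    speed_factor = speed / pedestrian_speed  # faster movers widen the area
    return base_radius + expansion * width_factor * age_factor * speed_factor
```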
  • When there are a plurality of target persons T, the grace area A2 may be set after adopting the lowest recognition level among them, since unexpected behavior is more likely when a plurality of target persons T are present.
  • The detection unit 73 detects the behavior of the target person T in the grace area A2. First, the detection unit 73 determines whether the target person T has entered the grace area A2 set by the setting unit 72.
  • For example, the detection unit 73 can identify the target person T from the image data captured by the RGB camera 32 and determine, based on the detection results of the laser ranging device 31 or the stereo camera 33, whether the target person T has entered the grace area A2.
  • The detection unit 73 then detects the behavior of the target person T once the target person T has entered the grace area A2. Specifically, the detection unit 73 detects behavior of the target person T that matches a detection condition of the behavior information 64.
  • The detection unit 73 notifies the determination unit 74 when it detects behavior that matches one of the above detection conditions. On the other hand, when the target person T enters the essential area A1 without taking behavior that matches a detection condition, the detection unit 73 requests the evacuation action, described later, from the determination unit 74.
  • The determination unit 74 determines the pre-action that the robot 1 executes for the target person T in the grace area A2 based on the behavior of the target person T detected by the detection unit 73. Specifically, the determination unit 74 changes the planned movement route as an example of the pre-action.
  • The planned movement route is the route along which the robot 1 is scheduled to move; it can be acquired from the remote control reception unit 2 or determined by the control unit 7.
  • FIGS. 5A and 5B are diagrams showing an example of the action according to the embodiment.
  • Here, a case where the target person T changes the traveling direction D2 will be described as an example.
  • As shown in FIG. 5A, for example, when the robot 1 moves along the planned movement route D1 and the target person T moves along the traveling direction D2, the robot 1 and the target person T approach each other and, in some cases, risk colliding.
  • In this case, the robot control device 10 detects the behavior of the target person T changing the traveling direction D2 and determines an action in response to that behavior.
  • Here, since the target person T has changed the traveling direction D2, the robot 1 would not collide with the target person T even if it continued along the original planned movement route D1.
  • However, the determination unit 74 changes the planned movement route D1 of the robot 1 and moves the robot 1 along the changed planned movement route D1. This lets the target person T know that the robot 1 has recognized the target person T.
  • At this time, the planned movement route D1 is changed to the side opposite the direction in which the target person T avoided the robot 1.
  • In the example shown in FIG. 5A, since the target person T changed the traveling direction D2 to the right relative to their travel, the robot 1 also changes the planned movement route D1 to the right relative to its travel.
  • In other words, the robot control device 10 determines the pre-action in response to the target person T's act of opening the passage for the robot 1, and causes the robot 1 to execute it.
  • As a result, the target person T can intuitively recognize the action of the robot 1.
  • The determination unit 74 may also determine the magnitude of the action based on the target person T's recognition level. That is, the determination unit 74 makes the action smaller as the target person T's recognition level is higher and larger as it is lower.
  • For example, taking the case where the target person T's recognition level is "B" as a baseline, the determination unit 74 makes the change to the planned movement route D1 small when the recognition level is "A", and changes the planned movement route D1 significantly when the target person T's recognition level is "C".
  • In other words, for a target person T with a low recognition level, a route that detours widely is determined as the planned movement route D1, and for a target person T with a high recognition level, a route that detours only slightly is determined as the planned movement route D1.
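The direction and magnitude of this route change could be combined as in the sketch below: the robot shifts to its own right when the target person moved to their right (opposite lateral sides of the shared passage), and the detour grows as the recognition level drops. The degree values are assumptions; the text only orders the magnitudes.

```python
def pre_action_shift_deg(person_turn: str, level: str) -> float:
    """Signed heading change for the planned route D1. person_turn is
    "right" or "left" as seen from the target person. The per-level
    magnitudes are assumptions; the patent only says the detour is
    smallest for level "A" and largest for level "C"."""
    magnitude = {"A": 10.0, "B": 20.0, "C": 35.0}[level]
    return magnitude if person_turn == "right" else -magnitude
```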
  • In addition, the determination unit 74 may, for example, decelerate or stop the robot 1 as the pre-action to ensure the safety of the target person T.
  • Such behavior includes the target person T's pace quickening, the target person T looking at the robot 1 for a predetermined time or longer, and the like.
  • Further, once such behavior has been taken, the essential area A1 may be eliminated or reduced so that the evacuation action is not performed on the target person T.
  • For example, the essential area A1 may be made smaller. This is because the target person T changes the traveling direction D2 to avoid a collision with the robot 1, but if the essential area A1 is sufficiently large, the target person T may nevertheless pass through the essential area A1.
  • For example, the essential area A1 is changed to an elliptical shape whose major axis lies along the traveling direction of the robot 1 and whose minor axis is shorter than the radius of the original essential area A1. That is, the changed essential area A1 has a shape that the target person T is unlikely to pass through even when passing by the side of the robot 1.
  • Alternatively, the essential area A1 for that target person T may be eliminated altogether.
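Membership in the reshaped elliptical essential area can be tested by rotating the target person's position into the robot's frame, as in this sketch. The axis lengths are left as parameters; the patent gives no concrete values beyond the minor axis being shorter than the original radius.

```python
from math import cos, sin, radians

def in_reshaped_essential_area(robot_xy, heading_deg, person_xy,
                               semi_major: float, semi_minor: float) -> bool:
    """Point-in-ellipse test for the reshaped essential area A1: the major
    axis lies along the robot's traveling direction and the minor axis is
    shorter than the original circular radius."""
    dx = person_xy[0] - robot_xy[0]
    dy = person_xy[1] - robot_xy[1]
    th = radians(heading_deg)
    u = dx * cos(th) + dy * sin(th)   # along the traveling direction
    v = -dx * sin(th) + dy * cos(th)  # across it
    return (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0
```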
  • On the other hand, when the target person T enters the essential area A1, the determination unit 74 causes the robot 1 to execute the evacuation action to move away from the target person T. Specifically, as shown in the middle of FIG. 5B, for example, when the target person T enters the essential area A1 without changing course, the determination unit 74 calculates a planned movement route D1 that avoids a collision with the target person T.
  • Then, as the evacuation action, the determination unit 74 moves the robot 1 along the changed planned movement route D1 and causes it to stand by. As a result, the robot 1 can evacuate from the target person T.
  • In other words, whereas the action toward the target person T in the grace area A2 notifies the target person T that the robot 1 has recognized the target person T, the action toward the target person T in the essential area A1 is for retracting the robot 1 away from the target person T.
  • When the target person T enters the essential area A1, the determination unit 74 may also output a warning image and a warning sound from the output unit 4 to inform the target person T of the robot 1's existence.
  • FIG. 6 is a flowchart showing the processing procedure executed by the robot control device 10 according to the embodiment.
  • As shown in FIG. 6, when the robot control device 10 acquires the sensing results of the input unit 3 (step S101), it first determines whether there is a target person T who may come into contact with the robot 1 (step S102).
  • When the robot control device 10 determines in step S102 that a target person T exists (step S102, Yes), it identifies the target person T (step S103).
  • Next, the robot control device 10 sets the grace area A2 based on the identified target person T's recognition level of the robot 1 (step S104), and adjusts the expansion area based on the target person T's age and the like (step S105).
  • The robot control device 10 then determines whether behavior satisfying the above detection conditions is detected in the grace area A2 (step S106); when behavior satisfying a detection condition is detected (step S106, Yes), it determines the pre-action for the target person T (step S107).
  • The pre-action here includes, for example, changing the planned movement route D1.
  • After executing the pre-action determined in step S107 (step S108), the robot control device 10 executes its original task (step S109) and ends the process.
  • On the other hand, when no such behavior is detected, the robot control device 10 determines whether the target person T has entered the essential area A1 (step S110).
  • When the target person T enters the essential area A1 (step S110, Yes), the robot control device 10 executes the evacuation action (step S111) and proceeds to step S109.
  • When no target person exists in the determination of step S102 (step S102, No), the robot control device 10 proceeds to step S109.
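One pass of this procedure could be organized as below. This is a minimal sketch: every helper (find_target, identify, set_grace_area, and so on) is a hypothetical stand-in for the units described above, not an API defined by the patent.

```python
def control_step(sensing, robot):
    """One pass of the FIG. 6 procedure (steps S101-S111), assuming
    hypothetical helpers for the identification, setting, detection,
    and determination units."""
    target = find_target(sensing)                 # S102: anyone nearby?
    if target is None:
        return robot.execute_task()               # S102 No -> S109
    person = identify(target)                     # S103
    area = set_grace_area(person)                 # S104: from recognition level
    area = adjust_expansion(area, person)         # S105: age, passage width, ...
    if detects_yielding_behavior(person, area):   # S106
        robot.execute(decide_pre_action(person))  # S107-S108
    elif person_in_essential_area(person):        # S110
        robot.execute(evacuation_action(person))  # S111
    return robot.execute_task()                   # S109
```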
  • In the above, the case where the grace area A2 has a ring shape surrounding the essential area A1 was shown, but the grace area A2 may instead have, for example, a scattered, island-like shape; its shape may be changed arbitrarily as long as it is farther from the robot 1 than the essential area A1.
  • Each component of each device shown in the figures is a functional concept and does not necessarily have to be physically configured as illustrated. That is, the specific form of distribution and integration of each device is not limited to that shown; all or part of each device can be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions.
  • FIG. 7 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of the robot control device 10.
  • the computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input / output interface 1600.
  • Each part of the computer 1000 is connected by a bus 1050.
  • the CPU 1100 operates based on the program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands the program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to various programs.
  • the ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, a program that depends on the hardware of the computer 1000, and the like.
  • The HDD 1400 is a computer-readable recording medium that non-transitorily records programs executed by the CPU 1100 and the data used by such programs.
  • Specifically, the HDD 1400 is a recording medium that records the program according to the present disclosure, which is an example of the program data 1450.
  • the communication interface 1500 is an interface for the computer 1000 to connect to an external network 1550 (for example, the Internet).
  • the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
  • the input / output interface 1600 is an interface for connecting the input / output device 1650 and the computer 1000.
  • the CPU 1100 receives data from an input device such as a keyboard or mouse via the input / output interface 1600. Further, the CPU 1100 transmits data to an output device such as a display, a speaker, or a printer via the input / output interface 1600. Further, the input / output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium (media).
  • The media are, for example, optical recording media such as a DVD (Digital Versatile Disc) or PD (Phase change rewritable Disk), magneto-optical recording media such as an MO (Magneto-Optical disk), tape media, magnetic recording media, or semiconductor memories.
  • For example, the CPU 1100 of the computer 1000 realizes the functions of the identification unit 71 and the like by executing the program loaded into the RAM 1200.
  • The HDD 1400 also stores the program according to the present disclosure and the data of the storage unit 6.
  • The CPU 1100 reads the program data 1450 from the HDD 1400 and executes it, but as another example, these programs may be acquired from another device via the external network 1550.
  • The present technology can also have the following configurations.
  • (1) A robot control device comprising: a detection unit that is provided around a robot and detects the behavior of a target person in a grace area farther from the robot than an essential area in which an evacuation action of the robot is essential; and a determination unit that determines, based on the behavior detected by the detection unit, a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
  • (2) The robot control device according to (1) above, wherein the robot is a mobile robot.
  • (3) The robot control device according to (1) or (2) above, wherein, when the behavior is behavior that avoids the robot, the determination unit determines an action corresponding to the behavior as the pre-action.
  • (4) The robot control device according to any one of (1) to (3) above, wherein the detection unit detects behavior indicating that the target person has recognized the robot.
  • (5) The robot control device according to any one of (1) to (4) above, further comprising an identification unit that identifies the target person, wherein the determination unit determines the action based on the identification result by the identification unit.
  • (6) The robot control device according to (5) above, wherein the determination unit determines the magnitude of the action for the target person based on the number of times the target person has been identified by the identification unit.
  • (7) The robot control device according to (5) or (6) above, further comprising a setting unit that sets the grace area for the target person based on the identification result by the identification unit.
  • (8) The robot control device according to (7) above, wherein the setting unit sets the grace area based on the width of the passage through which the robot and the target person pass each other.
  • (9) The robot control device according to (7) or (8) above, wherein the setting unit sets the grace area based on the moving speed of the target person.
  • (10) The robot control device according to any one of (7) to (9) above, wherein the identification unit estimates the age of the target person, and the setting unit sets the grace area based on the age estimated by the identification unit.
  • (11) The robot control device according to any one of (1) to (10) above, wherein the determination unit changes the planned movement route of the robot based on the behavior of the target person and causes the robot to move along the changed planned movement route.
  • (12) The robot control device according to any one of (1) to (11) above, wherein, when the target person enters the essential area, the determination unit determines the evacuation action so as to avoid contact with the target person.
  • (13) The robot control device according to (12) above, wherein the determination unit stops the robot at the position reached after the evacuation action.
  • (14) The robot control device according to any one of (1) to (13) above, wherein, when the behavior of the target person is behavior that shows interest in the robot, the determination unit decelerates or stops the robot.
  • (15) A method in which a computer detects the behavior of a target person in a grace area that is provided around a robot and is farther from the robot than an essential area in which an evacuation action of the robot is essential, and determines, based on the detected behavior, a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
  • (16) A program that causes a computer to function as: a detection unit that is provided around a robot and detects the behavior of a target person in a grace area farther from the robot than an essential area in which an evacuation action of the robot is essential; and a determination unit that determines, based on the behavior detected by the detection unit, a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
  • 1 Robot, 10 Robot control device, 71 Identification unit, 72 Setting unit, 73 Detection unit, 74 Determination unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

A robot control device (10) comprises: a detection unit (73) that is disposed near a robot (1) and that detects the behavior of a subject (T) in a retreat area (A2) which is farther from the robot (1) than a necessary area (A1) in which a shunting action performed by the robot (1) becomes necessary; and a determination unit (74) that, on the basis of the behavior detected by the detection unit (73), determines an advance action to be performed by the robot (1) with respect to the subject (T) in the retreat area (A2).

Description

Robot control device, method, and program
 This disclosure relates to a robot control device, method, and program.
 There are robots that convey the movements of a mobile robot to surrounding target persons by means of images shown on a display or sounds output from a speaker.
JP-A-2007-196298
 However, the target person needs to understand not only the images and sounds but also the actual movement of the robot. For this reason, it may be difficult for the target person to easily grasp the operation of the robot.
 The present application has been made in view of the above, and its object is to provide a robot control device, method, and program that allow a target person to easily grasp the operation of a robot.
 In order to solve the above problems, a robot control device according to one aspect of the present disclosure includes a detection unit and a determination unit. The detection unit is provided around a robot and detects the behavior of a target person in a grace area farther from the robot than an essential area in which an evacuation action of the robot is essential. Based on the behavior detected by the detection unit, the determination unit determines a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
 According to one aspect of the embodiment, the target person can easily grasp the operation of the robot.
FIG. 1A is a diagram (1) showing an overview of the robot control device according to the embodiment. FIG. 1B is a diagram (2) showing an overview of the robot control device according to the embodiment. FIG. 2 is a block diagram showing a configuration example of the robot control device according to the embodiment. FIG. 3 is a diagram showing an example of target person information according to the embodiment. FIG. 4 is a diagram showing an example of behavior information according to the embodiment. FIG. 5A is a diagram (1) showing an example of an action with respect to a target person. FIG. 5B is a diagram (2) showing an example of an action with respect to a target person. FIG. 6 is a flowchart showing the processing procedure executed by the robot control device according to the embodiment. FIG. 7 is a hardware configuration diagram showing an example of a computer that realizes the functions of the robot control device.
 Embodiments of the present disclosure will be described in detail below with reference to the drawings. In each of the following embodiments, the same parts are designated by the same reference numerals, and duplicate description is omitted.
(Embodiment)
[System configuration according to the embodiment]
 First, an outline of one embodiment of the present disclosure will be described. For example, there is a technique for informing target persons around a robot of the robot's existence by displaying images on a display or outputting sounds. However, the target person needs to understand not only the images and sounds but also the actual movement of the robot. In addition, it is difficult for the target person to tell whether the robot has recognized them. Thus, it has been difficult for the target person to easily grasp the movement of the robot.
 This technical idea was conceived with the above points in mind: by determining the robot's action in response to the behavior the target person takes toward the robot, the target person can easily grasp the robot's movement.
 First, an outline of the robot control device according to the embodiment will be described with reference to FIGS. 1A and 1B. FIGS. 1A and 1B are diagrams showing an overview of the robot control device according to the embodiment. As shown in FIG. 1A, the robot control device 10 is a control device built into the robot 1 that controls the robot 1. The robot control device 10 may instead be provided outside the robot 1 and control the robot 1 remotely. The present embodiment assumes a case where the robot 1 and a target person T pass each other in a passage such as a corridor.
 For example, the robot 1 is a mobile robot; the example shown in FIG. 1 depicts a robot traveling on wheels. The robot may instead be a legged or flying mobile body. It may also have at least one arm, or it may be a mobile body without arms.
 The robot control device 10 detects the behavior of a target person T around the robot 1 based on the results of environment sensing, such as by the sensor S, and determines the robot 1's action with respect to the target person T.
 Specifically, as shown in FIG. 1B, in the control method according to the embodiment, an essential area A1 and a grace area A2 farther from the robot 1 than the essential area A1 are provided around the robot 1.
 In the example shown in FIG. 1B, the essential area A1 and the grace area A2 are each circular areas centered on the robot 1. The essential area A1 is, for example, an area where the target person T is likely to come into contact with the robot 1; when the target person T enters the essential area A1, the robot 1 performs an evacuation action to move away from the target person T. Here, the evacuation action is an action that moves the robot 1 away from the target person T.
 The grace area A2 is an area where there is a certain amount of time to spare before the target person T and the robot 1 would collide. The robot control device 10 determines a pre-action according to the above evacuation action of the robot 1 based on the behavior of the target person T in the grace area A2. Here, the pre-action is an action that the robot 1 performs passively in response to the behavior of the target person T, and it serves to notify the target person T that the robot 1 has recognized the target person T.
 Therefore, in the control method according to the embodiment, when the target person T behaves in the grace area A2 in a way that indicates the robot 1 has been recognized, the robot 1 performs a pre-action to show the target person T that it, in turn, has recognized the target person T.
 For example, when the robot control device 10 detects behavior that avoids the robot 1 (for example, a change of course) in the grace area A2, it determines an action responding to that behavior as the pre-action and causes the robot 1 to execute it.
 Here, when the target person T behaves so as to avoid the robot 1, it means the target person T has recognized the robot 1; that is, the target person T recognized the robot 1 and then gave way to it.
 Then, in response to this behavior, the robot control device 10 shows, as the robot 1's pre-action, that the robot 1 recognizes the target person T, letting the target person T intuitively grasp that the robot 1 is aware of them.
 As an example of the pre-action here, the traveling direction of the robot 1 is changed to the side opposite the course taken by the target person T; that is, the target person T and the robot 1 give way to each other.
 In other words, by deciding on an action as if responding to the target person T's behavior, the robot control device 10 can make the target person T aware, with few actions, that the robot 1 has noticed the target person T. Like the evacuation action, the pre-action moves the robot 1 away from the target person T, but it carries a different meaning from the evacuation action.
 Therefore, once the target person T knows that the robot 1 has recognized them, they no longer need to worry about the robot 1's subsequent actions, which reduces the burden on the target person T.
 In this way, the robot control device 10 sets the grace area A2 in addition to the essential area A1 and determines the robot 1's pre-action toward the target person T according to the target person T's behavior in the grace area A2. That is, the control method according to the embodiment applies the natural communication by which humans give way to each other to the communication between the target person T in the grace area A2 and the robot 1.
 As a result, the robot control device 10 can let the target person T know in advance, in the grace area A2 farther away than the essential area A1, that the robot 1 has recognized the target person T.
 Therefore, according to the robot control device 10 of the embodiment, the target person T can easily grasp the operation of the robot 1.
[Configuration of the robot control device according to the embodiment]
 Next, a configuration example of the robot control device 10 according to the embodiment will be described with reference to FIG. 2. FIG. 2 is a block diagram showing a configuration example of the robot control device 10 according to the embodiment.
 As shown in FIG. 2, the robot control device 10 includes a remote control reception unit 2, an input unit 3, an output unit 4, a drive unit 5, a storage unit 6, and a control unit 7. The remote control reception unit 2 is a communication unit that receives remote operations for the robot 1.
 The input unit 3 inputs the results of environment sensing around the robot 1 to the control unit 7. In the example shown in FIG. 2, the input unit 3 includes a laser ranging device 31, an RGB camera 32, a stereo camera 33, and an inertial measurement unit 34.
 The laser ranging device 31 is a device that measures the distance to an obstacle and is composed of an infrared rangefinder, an ultrasonic rangefinder, LiDAR (Laser Imaging Detection and Ranging), or the like.
 The RGB camera 32 is an imaging device that captures images (still or moving). The stereo camera 33 is an imaging device that measures the distance to an object by imaging the object from a plurality of directions. The inertial measurement unit 34 is, for example, a device that detects three-axis angles and acceleration.
 The output unit 4 is provided on the robot 1, for example, and is composed of a display device and a speaker. The output unit 4 outputs images and sounds input from the control unit 7. The drive unit 5 is composed of actuators and drives the robot 1 under the control of the control unit 7.
 The storage unit 6 stores target person information 61, model information 62, parameter information 63, behavior information 64, and action information 65.
 The target person information 61 is information about the target person T. In the present embodiment, the target person information 61 is information on the number of times and the frequency with which the target person T has come into contact with the robot 1. FIG. 3 is a diagram showing an example of the target person information 61 according to the embodiment.
 As shown in FIG. 3, the target person information 61 associates a "target person ID", a "feature amount", a "contact history", a "recognition level", and the like with one another. The "target person ID" is an identifier that identifies the target person T. The "feature amount" indicates the feature amount of the corresponding target person T; for example, it is information on the feature amount of the target person T's face.
 「接触履歴」は、対応する対象者Tがロボット1と接した履歴に関する情報である。言い換えれば、ここでの接触履歴は、ロボット1が対象者Tを認識した履歴である。例えば、接触履歴には、ロボット1が対象者Tを認識した日時、頻度などに関する情報が登録される。 The "contact history" is information related to the history of the corresponding target person T contacting the robot 1. In other words, the contact history here is a history in which the robot 1 recognizes the target person T. For example, in the contact history, information regarding the date and time, frequency, and the like when the robot 1 recognizes the target person T is registered.
 「認知度」は、対応する対象者Tがロボット1のことについて認知している度合を示す。本実施形態では、接触履歴に基づき、ロボットと接した回数又は接する頻度によって認知度が設定される。 "Awareness" indicates the degree to which the corresponding subject T is aware of the robot 1. In the present embodiment, the recognition level is set according to the number of times of contact with the robot or the frequency of contact with the robot based on the contact history.
 本実施形態においては、認知度は、3段階で表現され、「A」が最も認知度が高いことを示し、「C」が認知度が最も低いことを示すものとする。例えば、認知度「A」は、定常的にロボット1と接していることを示し、認知度「C」である対象者Tは、初めてロボットと接することを示す。つまり、対象者Tがロボット1と接する回数が多くなるにつれて、認知度が高くなるように更新される。 In the present embodiment, the recognition level is expressed in three stages, "A" indicates the highest recognition level, and "C" indicates the lowest recognition level. For example, the recognition level "A" indicates that the robot 1 is in constant contact with the robot 1, and the subject T having the recognition level "C" indicates that the robot T is in contact with the robot for the first time. That is, as the number of times the target person T comes into contact with the robot 1 increases, the recognition level is updated.
 図2の説明に戻り、モデル情報62について説明する。モデル情報62は、画像データから対象者Tの身体的特性を特定するモデルに関する情報である。例えば、モデル情報62は、対象者Tの年齢を推定するモデルに関する情報である。 Returning to the explanation of FIG. 2, the model information 62 will be described. The model information 62 is information about a model that identifies the physical characteristics of the subject T from the image data. For example, the model information 62 is information about a model for estimating the age of the subject T.
 パラメータ情報63は、猶予エリアA2に関する各種パラメータに関する情報である。後述するように、ロボット制御装置10は、対象者Tに応じて、猶予エリアA2を設定することができ、猶予エリアA2を設定するためのパラメータがパラメータ情報63として記憶部6に記憶される。なお、パラメータ情報63の具体例については後述する。 Parameter information 63 is information related to various parameters related to the grace area A2. As will be described later, the robot control device 10 can set the grace area A2 according to the target person T, and the parameters for setting the grace area A2 are stored in the storage unit 6 as parameter information 63. A specific example of the parameter information 63 will be described later.
 挙動情報64は、対象者Tの挙動に関する情報であり、本実施形態において、対象者Tがロボット1を認識したことを示す挙動に関する情報である。図4は、実施形態に係る挙動情報64の一例を示す図である。 The behavior information 64 is information on the behavior of the target person T, and is information on the behavior indicating that the target person T has recognized the robot 1 in the present embodiment. FIG. 4 is a diagram showing an example of the behavior information 64 according to the embodiment.
 As shown in FIG. 4, the behavior information 64 associates a "detection ID" with a "detection condition". The "detection ID" is an identifier that identifies each detection condition. The "detection condition" is a condition for detecting behavior that the target person T performs after recognizing the robot 1.
 In the example shown in FIG. 4, the course change indicated in the detection condition of detection ID "D001" means that the target person T has changed course and opened a passage for the robot 1. The decrease in speed indicated in the detection condition of detection ID "D002" means that the moving speed of the target person T has decreased, and the acceleration after a change of direction means that the target person T has accelerated after yielding the passage to the robot 1.
 The faster or slower pace indicated in the detection condition of detection ID "D003" means that the pace of the target person T changed between before and after recognizing the robot 1. Sending a line of sight to the robot 1, indicated in the detection condition of detection ID "D004", means that the target person T has looked at the robot 1; as also indicated for detection ID "D004", sending a line of sight in a direction other than toward the robot 1 for a certain period of time or longer may likewise be used as a detection condition.
 For example, when the target person T gazes in a direction other than toward the robot 1 for a certain period of time or longer, it can be assumed that the target person T is looking for a route to avoid a collision with the robot 1, or that the target person T is reluctant about communicating with the robot 1 (for example, is afraid of the robot 1). That is, such behavior can be regarded as behavior that the target person T performs after recognizing the robot 1.
 The change in the orientation of a part of the body indicated in the detection condition of detection ID "D004" refers to behavior such as the orientation of the torso, arms, or lower legs changing, or the upper body leaning back, even though the target person has not changed course.
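 For reference, the table of FIG. 4 could be held as a simple mapping; the sketch below is a hypothetical encoding of the behavior information 64, not the patent's actual data format:

```python
# Hypothetical encoding of the behavior information 64 (FIG. 4):
# detection ID -> detection condition, as described above.
BEHAVIOR_INFO = {
    "D001": "changed course and opened a passage for the robot",
    "D002": "moving speed decreased / accelerated after changing direction",
    "D003": "pace became faster or slower",
    "D004": "gazed at the robot (or away from it) beyond a set duration, "
            "or the orientation of a part of the body changed",
}
```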
 Returning to the explanation of FIG. 2, the action information 65 will be described. The action information 65 is information about the actions that the robot 1 executes in response to the behavior of the target person T and about the timing at which those actions are performed.
 The control unit 7 has a function of controlling each component included in the robot control device 10. As shown in FIG. 2, the control unit 7 includes an identification unit 71, a setting unit 72, a detection unit 73, and a determination unit 74.
 The identification unit 71 identifies the target person from, for example, image data captured by the RGB camera 32. Specifically, the identification unit 71 extracts the feature amount of the face of the target person T from the image data and identifies the target person T by comparing the extracted feature amount with the feature amounts registered in the target person information 61.
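 A minimal sketch of such feature-amount matching follows; cosine similarity and the 0.8 threshold are assumptions, as the disclosure does not specify the comparison metric:

```python
import numpy as np

def identify(face_feature, registry, threshold=0.8):
    """Return the ID of the registered target person whose stored
    feature amount is most similar to the extracted one, or None if
    no similarity exceeds the (assumed) threshold."""
    best_id, best_score = None, threshold
    for person_id, ref in registry.items():
        score = float(np.dot(face_feature, ref) /
                      (np.linalg.norm(face_feature) * np.linalg.norm(ref)))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```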
 Based on the model of the model information 62, the identification unit 71 can also estimate the age of the target person T appearing in the image data.
 The setting unit 72 sets the grace area A2 for the target person T based on the identification result of the identification unit 71. The setting unit 72 sets the grace area A2 based on the awareness level of the target person T, the age of the target person T, the width of the passage where the robot 1 and the target person T pass each other, and the like.
 As for the information on the passage width, the robot control device 10 may hold a map of passage widths in advance, or may calculate the width on the robot control device 10 side based on the detection results of the input unit 3.
 The setting unit 72 extracts parameters corresponding to the identification result of the identification unit 71 and to the passage width from the parameter information 63, and sets the grace area A2 based on the extracted parameters. Specifically, the setting unit 72 sets the grace area A2 narrower as the awareness level is higher and wider as the awareness level is lower.
 That is, the grace area A2 is narrowed for a target person T who is in daily contact with the robot 1, in other words, a target person T who fully understands the operation of the robot 1. This allows the robot 1 to refrain from taking actions toward such a target person T, suppressing a decrease in the work efficiency of the robot 1.
 On the other hand, the behavior that a target person T encountering the robot 1 for the first time takes toward it varies from person to person, and it is difficult to predict when and what kind of behavior will be taken. For such a target person T, widening the grace area A2 makes it possible to reliably detect the behavior after the target person T recognizes the robot 1.
 Specifically, taking the grace area A2 for a target person T whose awareness level is "A" as the reference, the setting unit 72 expands the grace area A2 as the awareness level decreases, for example by +3 m when the awareness level is "B" and by +5 m when it is "C". Here, +3 m and +5 m indicate the radius by which the grace area A2 is expanded. In the following, the expanded portion of the area is referred to as the expansion area.
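 Expressed as a lookup, this is a sketch only: the "+3 m"/"+5 m" figures come from the text, while the baseline radius is whatever the "A"-level grace area uses:

```python
# Expansion radius added to the "A"-level grace area, in metres,
# per the example values quoted above.
EXPANSION_BY_AWARENESS = {"A": 0.0, "B": 3.0, "C": 5.0}

def expanded_radius(base_radius_m: float, awareness: str) -> float:
    """Grace-area radius after the awareness-based expansion."""
    return base_radius_m + EXPANSION_BY_AWARENESS[awareness]
```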
 The setting unit 72 can also adjust the expansion area by multiplying the expansion area set based on the above awareness level by variables corresponding to the age, the passage width, and the like. For example, when the target person T is a child, some behavior may already be taken at the moment the robot 1 is recognized; when the target person is elderly, it is assumed that the field of view is narrower than that of an adult and that the robot 1 is therefore recognized at a closer distance.
 For this reason, the grace area A2 is set by multiplying the above expansion area by, for example, "1.0" for ages 15 to 50, "1.3" for under 15, and "0.7" for 50 and over.
 That is, the above example shows a case where the expansion area is set wider for a child than for an adult, and narrower when the target person T is elderly. By adjusting the expansion area according to the target person T in this way, the behavior of the target person T after recognizing the robot 1 can be detected appropriately.
 When the passage width is taken into account, the setting unit 72 sets the grace area A2 by multiplying the above expansion area by "1.0" when the passage width is 1 m to 3 m, "1.3" when it is less than 1 m, and "0.7" when it is 3 m or more.
 That is, the wider the passage, the narrower the expansion area, and the narrower the passage, the wider the expansion area. This is because the narrower the passage, the more limited the field of view of the target person T, so that the target person T can recognize the robot 1 from a greater distance than when the passage is wide.
 In this way, by adjusting the expansion area based on the passage width, the grace area A2 can be set appropriately according to the environment.
 In addition, when the moving speed of the target person T is higher than when walking, such as when the target person T is driving a vehicle such as a car or a motorcycle or when the target person T is running, the setting unit 72 may set the grace area A2 in consideration of that moving speed.
 Specifically, for example, a grace area A2 corresponding to the moving speed can be set by multiplying the above expansion area by the value obtained by dividing the average of the current moving speed of the target person T (including any vehicle) by the typical average speed of a pedestrian. That is, the faster the moving speed, the wider the grace area A2 is set.
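 Putting the three adjustments together, a hedged sketch of the grace-area computation; the factor boundaries follow the examples above, while the 1.4 m/s pedestrian average is an assumption:

```python
def grace_radius(base_m: float, awareness: str, age: int,
                 passage_width_m: float, speed_mps: float,
                 pedestrian_avg_mps: float = 1.4) -> float:
    """Grace-area radius = base + expansion area scaled by the
    age, passage-width and speed variables described above."""
    expansion = {"A": 0.0, "B": 3.0, "C": 5.0}[awareness]
    age_factor = 1.3 if age < 15 else 0.7 if age >= 50 else 1.0
    if passage_width_m < 1.0:
        width_factor = 1.3
    elif passage_width_m >= 3.0:
        width_factor = 0.7
    else:
        width_factor = 1.0
    speed_factor = speed_mps / pedestrian_avg_mps
    return base_m + expansion * age_factor * width_factor * speed_factor
```

 For instance, grace_radius(5.0, "C", 10, 0.8, 1.4) combines the widest settings of this sketch: low awareness, a child, and a narrow passage.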
 When there are a plurality of target persons T, the awareness level may be matched to the target person T with the lowest level; alternatively, since unexpected behavior is more likely with a plurality of target persons T, the grace area A2 may be set after setting the awareness level to the lowest grade.
 The detection unit 73 detects the behavior of the target person T in the grace area A2. First, the detection unit 73 determines whether the target person T has entered the grace area A2 set by the setting unit 72.
 For example, the detection unit 73 can identify the target person T from the image data captured by the RGB camera 32, and can determine whether the target person T has entered the grace area A2 based on the detection results of the laser ranging device 31 or the stereo camera 33.
 Subsequently, when the target person T enters the grace area A2, the detection unit 73 detects the behavior of the target person T. Specifically, the detection unit 73 detects, as the behavior of the target person T, behavior that matches a detection condition of the behavior information 64.
 When the detection unit 73 detects behavior matching the above detection conditions, it notifies the determination unit 74. On the other hand, when the target person T enters the essential area A1 without taking behavior that matches any detection condition, the detection unit 73 requests the determination unit 74 to perform the evacuation action described later.
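 The two-stage check performed by the detection unit can be sketched as follows; positions are assumed to be (x, y) pairs, and `behavior_matched` is a hypothetical flag standing in for matching any condition of FIG. 4:

```python
import math

def detection_step(subject_xy, robot_xy, grace_r, essential_r,
                   behavior_matched: bool) -> str:
    """Return what the detection unit 73 should do next: nothing,
    notify the determination unit 74 of a matched behavior, or
    request the evacuation action. A sketch under the assumptions
    in the lead-in, not the patent's actual interface."""
    dist = math.dist(subject_xy, robot_xy)
    if dist > grace_r:
        return "outside_grace_area"
    if behavior_matched:
        return "notify_determination_unit"
    if dist <= essential_r:
        return "request_evacuation_action"
    return "keep_observing"
```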
 The determination unit 74 determines, based on the behavior of the target person T detected by the detection unit 73, the pre-action that the robot 1 executes for the target person T in the grace area A2. Specifically, as an example of the pre-action, the determination unit 74 changes the planned movement route. The planned movement route is the route along which the robot 1 is scheduled to move, and it can be acquired from the remote control reception unit 2 or determined by the control unit 7.
 FIGS. 5A and 5B are diagrams showing an example of the actions according to the embodiment. In the following, a case where the target person T takes the behavior of changing the traveling direction D2 will be described as an example. As shown in FIG. 5A, for example, when the robot 1 moves along the planned movement route D1 and the target person T moves along the traveling direction D2, the robot 1 and the target person T approach each other and may, in some cases, collide.
 In contrast, as shown in the middle part of FIG. 5A, when the target person T changes the traveling direction D2 within the grace area A2 and takes a path that avoids contact with the robot 1, the robot control device 10 detects this behavior of changing the traveling direction D2 and determines an action in response to it.
 Here, in the example shown in the middle part of FIG. 5A, since the target person T has changed the traveling direction D2, the robot 1 would not collide with the target person T even if it traveled along the original planned movement route D1.
 However, from the target person T's point of view, it is not known whether the robot 1 has recognized the target person T, so the target person T has to keep observing the movement of the robot 1 and confirm that the robot 1 is not coming toward them.
 In contrast, the determination unit 74 changes the planned movement route D1 of the robot 1 and moves the robot 1 along the changed planned movement route D1. This makes it possible for the target person T to grasp that the robot 1 has recognized the target person T.
 Specifically, as shown in the lower part of FIG. 5A, the planned movement route D1 is changed in the direction opposite to the direction in which the target person T avoided the robot 1. The example of FIG. 5A shows a case where, since the target person T changed the traveling direction D2 to the right with respect to their own traveling direction, the robot 1 likewise changed the planned movement route D1 to the right with respect to its traveling direction.
 In this way, the pre-action is determined in response to the target person T's action of opening a passage for the robot 1, and the robot 1 is made to execute it. This allows the target person T to intuitively recognize the action of the robot 1.
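 A sketch of the mirrored route offset; the 0.5 m lateral shift and the unit-vector heading are assumptions of this illustration:

```python
def pre_action_offset(heading_xy, subject_turned_right: bool,
                      shift_m: float = 0.5):
    """Lateral offset to apply to the planned movement route so the
    robot visibly yields in response to the subject's avoidance, as
    in the lower part of FIG. 5A: when both parties turn to their
    own right while walking toward each other, the gap between the
    two paths opens. `heading_xy` is assumed to be a unit vector
    along the robot's direction of travel."""
    hx, hy = heading_xy
    right_xy = (hy, -hx)  # unit vector to the robot's right
    sign = 1.0 if subject_turned_right else -1.0
    return (sign * shift_m * right_xy[0], sign * shift_m * right_xy[1])
```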
 That is, the operation of the robot 1 can be conveyed to the target person T with a simple action alone, without using images or sounds. Although the case of changing the planned movement route D1 is shown here, when the robot 1 is a humanoid robot, for example, an action corresponding to the behavior of the target person T, such as bowing to the target person T, may be executed instead.
 At this time, the determination unit 74 may determine the magnitude of the action based on the awareness level of the target person T. That is, the determination unit 74 makes the action smaller as the awareness level of the target person T is higher, and larger as the awareness level is lower.
 For example, taking the case where the awareness level of the target person T is "B" as the reference, the determination unit 74 keeps the change of the planned movement route D1 to a minimum when the awareness level is "A", and changes the planned movement route D1 significantly when the awareness level of the target person T is "C".
 That is, when the awareness level of the target person T is low, a route that detours widely is determined as the planned movement route D1, whereas when the awareness level of the target person T is high, a route that detours only slightly is determined as the planned movement route D1.
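 Illustratively, the detour magnitude could be keyed by awareness level; the metre values below are hypothetical, as the text only gives the ordering:

```python
# Hypothetical detour widths for the pre-action route change,
# decreasing as the subject's awareness of the robot increases.
DETOUR_BY_AWARENESS_M = {"A": 0.2, "B": 0.5, "C": 1.0}
```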
 Further, depending on the target person T, it is conceivable that, upon finding the robot 1, the target person approaches it without taking avoiding action. For this reason, when behavior indicating that the target person T is interested in the robot 1 is detected, the determination unit 74 may, for example, decelerate or stop the robot 1 as the pre-action to ensure the safety of the target person T. Examples of such behavior include the pace of the target person T becoming faster and the target person T gazing at the robot 1 for a predetermined time or longer. In this case, the essential area A1 may also be eliminated or reduced so that no evacuation action is performed for such a target person T.
 In this case, as shown in the lower part of FIG. 5A, the essential area A1 may be made smaller. This is because, although the target person T has avoided a collision with the robot 1 by changing the traveling direction D2, the target person T may still pass through the essential area A1 if it is sufficiently large.
 That is, since the target person T and the robot 1 have recognized each other, an actual collision is unlikely; nevertheless, if the target person T passes through the essential area A1, the robot 1 will perform an evacuation action.
 In the example of FIG. 5A, the essential area A1 has an elliptical shape whose major axis follows the traveling direction of the robot 1, and its minor axis is made shorter than the radius of the original essential area A1. That is, the changed essential area A1 has a shape that makes it difficult for the target person T to enter the essential area A1 even when passing by the side of the robot 1.
 In this way, by making the essential area A1 smaller for a target person who has taken a predetermined behavior in the grace area A2, unnecessary evacuation actions for the target person T can be suppressed. In this case, the essential area A1 for the target person T may be eliminated altogether.
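 The narrowed, heading-aligned essential area can be tested with a standard point-in-ellipse check; the sketch below assumes a unit heading vector:

```python
def in_essential_ellipse(point_xy, robot_xy, heading_xy,
                         semi_major_m: float, semi_minor_m: float) -> bool:
    """True if `point_xy` lies inside the elliptical essential area
    whose major axis follows the robot's traveling direction, as in
    the example of FIG. 5A."""
    dx = point_xy[0] - robot_xy[0]
    dy = point_xy[1] - robot_xy[1]
    along = dx * heading_xy[0] + dy * heading_xy[1]    # forward axis
    across = -dx * heading_xy[1] + dy * heading_xy[0]  # lateral axis
    return (along / semi_major_m) ** 2 + (across / semi_minor_m) ** 2 <= 1.0
```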
 Next, assume a case where the target person T enters the essential area A1 without noticing the robot 1, that is, a case where the target person T has not taken any behavior matching the above detection conditions. In this case, the determination unit 74 causes the robot 1 to execute an evacuation action to retreat from the target person T. Specifically, as shown in the middle part of FIG. 5B, when the target person T enters the essential area A1 without, for example, changing course, the determination unit 74 calculates a planned movement route D1 that avoids a collision with the target person T.
 The example shown in the lower part of FIG. 5B shows a case where, since the target person T is approaching the robot 1 from the right side of the drawing, the planned movement route D1 is changed to the left side of the drawing. For example, the determination unit 74 moves the robot 1 along the changed planned movement route D1 and has it wait there as the evacuation action. This makes it possible to retreat from the target person T.
 That is, while the action toward the target person T in the grace area A2 notifies the target person that the robot 1 has recognized them, the action toward the target person T in the essential area A1 is for retreating the robot 1 from the target person T.
 When the target person T enters the essential area A1, the determination unit 74 may also make the presence of the robot 1 known to the target person T by outputting a warning image or a warning sound from the output unit 4.
 Next, the processing procedure executed by the robot control device 10 according to the embodiment will be described with reference to FIG. 6. FIG. 6 is a flowchart showing the processing procedure executed by the robot control device 10 according to the embodiment.
 As shown in FIG. 6, the robot control device 10 first acquires the sensing result of the input unit 3 (step S101) and determines whether there is a target person T who may come into contact with the robot 1 (step S102).
 When the robot control device 10 determines in step S102 that a target person T exists (step S102, Yes), it identifies the target person T (step S103).
 Subsequently, the robot control device 10 sets the grace area A2 based on the identified target person T's awareness level of the robot 1 (step S104) and adjusts the expansion area based on the age of the target person T and the like (step S105).
 After that, the robot control device 10 determines whether behavior satisfying the above detection conditions has been detected in the grace area A2 (step S106), and when such behavior is detected (step S106, Yes), it determines the pre-action for the target person T (step S107). The pre-action here includes, for example, changing the planned movement route D1.
 Subsequently, the robot control device 10 executes the pre-action determined in step S107 (step S108), then executes its original task (step S109) and ends the process.
 When the robot control device 10 has not detected behavior satisfying the detection conditions in the determination of step S106 (step S106, No), it determines whether the target person T has entered the essential area A1 (step S110).
 When the target person T has entered the essential area A1 (step S110, Yes), the robot control device 10 executes the evacuation action (step S111) and proceeds to the process of step S109.
 When there is no target person in the determination of step S102 (step S102, No), the robot control device 10 proceeds directly to the process of step S109.
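 The flow of FIG. 6 can be condensed into one decision function; the sketch below assumes its inputs come from the units described above and reduces the procedure to the branch that selects the robot's next move:

```python
import math

def control_step(subject_xy, robot_xy, grace_r, essential_r,
                 behavior_matched: bool) -> str:
    """One pass of the FIG. 6 procedure (step numbers in comments)."""
    if subject_xy is None:                      # S102: no target person
        return "run_task"                       # S109
    dist = math.dist(subject_xy, robot_xy)
    if dist <= grace_r and behavior_matched:    # S106: condition met
        return "pre_action_then_task"           # S107-S109
    if dist <= essential_r:                     # S110: essential area
        return "evacuate_then_task"             # S111 then S109
    return "keep_observing"                     # back to S106
```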
 Incidentally, the embodiment described above shows the case where the grace area A2 has a donut shape surrounding the essential area A1; however, the grace area A2 may, for example, consist of detached, island-like regions, and its shape may be changed arbitrarily as long as it is farther from the robot 1 than the essential area A1.
 Of the processes described in each of the above embodiments, all or part of the processes described as being performed automatically can also be performed manually, and conversely, all or part of the processes described as being performed manually can be performed automatically by known methods. In addition, the processing procedures, specific names, and information including various data and parameters shown in the above document and drawings can be changed arbitrarily unless otherwise specified. For example, the various kinds of information shown in each figure are not limited to the illustrated information.
 Each component of each illustrated device is functionally conceptual and does not necessarily have to be physically configured as illustrated. That is, the specific form of distribution and integration of each device is not limited to that illustrated, and all or part of it can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
 The information devices such as the robot control device, HMD, and controller according to each of the embodiments described above are realized by, for example, a computer 1000 having the configuration shown in FIG. 7. The robot control device 10 according to the embodiment will be described below as an example. FIG. 7 is a hardware configuration diagram showing an example of the computer 1000 that realizes the functions of the robot control device 10. The computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. The parts of the computer 1000 are connected by a bus 1050.
 The CPU 1100 operates based on programs stored in the ROM 1300 or the HDD 1400 and controls each part. For example, the CPU 1100 loads programs stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to the various programs.
 The ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 starts, programs that depend on the hardware of the computer 1000, and the like.
 The HDD 1400 is a computer-readable recording medium that non-temporarily records programs executed by the CPU 1100 and data used by such programs. Specifically, the HDD 1400 is a recording medium that records the program according to the present disclosure, which is an example of program data 1450.
 The communication interface 1500 is an interface for connecting the computer 1000 to an external network 1550 (for example, the Internet). For example, via the communication interface 1500, the CPU 1100 receives data from other devices and transmits data generated by the CPU 1100 to other devices.
 The input/output interface 1600 is an interface for connecting an input/output device 1650 and the computer 1000. For example, the CPU 1100 receives data from input devices such as a keyboard and a mouse via the input/output interface 1600. The CPU 1100 also transmits data to output devices such as a display, a speaker, or a printer via the input/output interface 1600. The input/output interface 1600 may also function as a media interface that reads programs and the like recorded on a predetermined recording medium (media). Examples of media include optical recording media such as a DVD (Digital Versatile Disc) and a PD (Phase change rewritable Disk), magneto-optical recording media such as an MO (Magneto-Optical disk), tape media, magnetic recording media, and semiconductor memories.
 For example, when the computer 1000 functions as the robot control device 10 according to the embodiment, the CPU 1100 of the computer 1000 realizes the functions of the identification unit 71 and the like by executing the program loaded on the RAM 1200. The HDD 1400 stores the program according to the present disclosure and the data in the storage unit 6. The CPU 1100 reads the program data 1450 from the HDD 1400 and executes it; as another example, these programs may be acquired from other devices via the external network 1550.
The present technology can also have the following configurations.
(1)
A robot control device including:
a detection unit that detects the behavior of a target person in a grace area provided around a robot and farther from the robot than an essential area in which an evacuation action of the robot is essential; and
a determination unit that determines, based on the behavior detected by the detection unit, a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
(2)
The robot
A mobile robot,
The robot control device according to (1) above.
(3)
The determination unit
When the behavior is a behavior that avoids the robot, an action corresponding to the behavior is determined as the pre-action.
The robot control device according to (2) above.
(4)
The detection unit
Detects the behavior indicating that the target person has recognized the robot.
The robot control device according to (3) above.
(5)
It is equipped with an identification unit that identifies the target person.
The determination unit
The action is determined based on the identification result by the identification unit.
The robot control device according to any one of (1) to (4) above.
(6)
The determination unit
The magnitude of the action for the target person is determined based on the number of times the target person is identified by the identification unit.
The robot control device according to (5) above.
(7)
A setting unit for setting the grace area for the target person based on the identification result by the identification unit is provided.
The robot control device according to (5) or (6) above.
(8)
The setting unit
The grace area is set based on the passage width of the passage through which the robot and the target person pass each other.
The robot control device according to (7) above.
(9)
The setting unit
The grace area is set based on the moving speed of the target person.
The robot control device according to (7) or (8) above.
(10)
The identification unit estimates the age of the target person, and
the setting unit sets the grace area based on the age estimated by the identification unit.
The robot control device according to any one of (7) to (9) above.
(11)
The determination unit
The planned movement route of the robot is changed based on the behavior of the target person, and the robot is moved along the changed planned movement route.
The robot control device according to any one of (1) to (10) above.
(12)
The determination unit
When the target person enters the essential area, the evacuation action to avoid contact with the target person is determined.
The robot control device according to any one of (1) to (11).
(13)
The determination unit
At the position after the evacuation action, the robot is stopped.
The robot control device according to (12) above.
(14)
The determination unit
When the behavior of the target person is a behavior that shows interest in the robot, the robot is decelerated or stopped.
The robot control device according to any one of (1) to (13).
(15)
A method in which a computer:
detects the behavior of a target person in a grace area provided around a robot and farther from the robot than an essential area in which an evacuation action of the robot is essential; and
determines, based on the detected behavior, a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
(16)
A program causing a computer to function as:
a detection unit that detects the behavior of a target person in a grace area provided around a robot and farther from the robot than an essential area in which an evacuation action of the robot is essential; and
a determination unit that determines, based on the behavior detected by the detection unit, a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
1 Robot
10 Robot control device
71 Identification unit
72 Setting unit
73 Detection unit
74 Determination unit

Claims (16)

1. A robot control device comprising:
a detection unit that detects the behavior of a target person in a grace area which is provided around a robot and is farther from the robot than an essential area in which an evacuation action of the robot is essential; and
a determination unit that determines, based on the behavior detected by the detection unit, a pre-action according to the evacuation action to be executed for the target person in the grace area.
2. The robot control device according to claim 1, wherein the robot is a mobile robot.
3. The robot control device according to claim 2, wherein, when the behavior is a behavior of avoiding the robot, the determination unit determines an action responding to the behavior as the pre-action.
4. The robot control device according to claim 3, wherein the detection unit detects the behavior indicating that the target person has recognized the robot.
5. The robot control device according to claim 1, further comprising an identification unit that identifies the target person, wherein the determination unit determines the pre-action based on an identification result by the identification unit.
6. The robot control device according to claim 5, wherein the determination unit determines the magnitude of the pre-action for the target person based on the number of times the target person has been identified by the identification unit.
7. The robot control device according to claim 5, further comprising a setting unit that sets the grace area for the target person based on the identification result by the identification unit.
8. The robot control device according to claim 7, wherein the setting unit sets the grace area based on the width of a passage where the robot and the target person pass each other.
9. The robot control device according to claim 7, wherein the setting unit sets the grace area based on the moving speed of the target person.
10. The robot control device according to claim 7, wherein the identification unit estimates the age of the target person, and the setting unit sets the grace area based on the age estimated by the identification unit.
11. The robot control device according to claim 1, wherein the determination unit changes a planned movement route of the robot based on the behavior of the target person and causes the robot to move along the changed planned movement route.
12. The robot control device according to claim 1, wherein, when the target person enters the essential area, the determination unit causes an evacuation action for avoiding contact with the target person to be executed.
13. The robot control device according to claim 12, wherein the determination unit stops the robot at the position reached after the evacuation action.
14. The robot control device according to claim 1, wherein, when the behavior of the target person is a behavior showing interest in the robot, the determination unit decelerates or stops the robot as the pre-action.
15. A method in which a computer:
detects the behavior of a target person in a grace area which is provided around a robot and is farther from the robot than an essential area in which an evacuation action of the robot is essential; and
determines, based on the detected behavior, a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
16. A program causing a computer to function as:
a detection unit that detects the behavior of a target person in a grace area which is provided around a robot and is farther from the robot than an essential area in which an evacuation action of the robot is essential; and
a determination unit that determines, based on the behavior detected by the detection unit, a pre-action according to the evacuation action that the robot executes for the target person in the grace area.
PCT/JP2020/025742 2019-08-14 2020-07-01 Robot control device, method, and program WO2021029151A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019148941 2019-08-14
JP2019-148941 2019-08-14

Publications (1)

Publication Number Publication Date
WO2021029151A1 true WO2021029151A1 (en) 2021-02-18

Family

ID=74570582

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/025742 WO2021029151A1 (en) 2019-08-14 2020-07-01 Robot control device, method, and program

Country Status (1)

Country Link
WO (1) WO2021029151A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006092253A (en) * 2004-09-24 2006-04-06 Matsushita Electric Works Ltd Autonomous movement device
JP2008065755A (en) * 2006-09-11 2008-03-21 Hitachi Ltd Mobile device
JP2008254134A (en) * 2007-04-06 2008-10-23 Honda Motor Co Ltd Moving device, its control method and control program
JP2009110495A (en) * 2007-04-12 2009-05-21 Panasonic Corp Autonomous mobile device, and control device and program for the autonomous mobile device
WO2013046563A1 (en) * 2011-09-29 2013-04-04 パナソニック株式会社 Autonomous motion device, autonomous motion method, and program for autonomous motion device
WO2014148051A1 (en) * 2013-03-21 2014-09-25 パナソニック株式会社 Method and device for performing autonomous traveling control on autonomously traveling device, and program for autonomous-travel controller
JP2015066621A (en) * 2013-09-27 2015-04-13 株式会社国際電気通信基礎技術研究所 Robot control system, robot, output control program and output control method
JP2019084641A (en) * 2017-11-08 2019-06-06 学校法人早稲田大学 Autonomous mobile robot, and control device and operation control program of the same



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20853293

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20853293

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP