WO2021044953A1 - Information processing system and information processing method - Google Patents

Information processing system and information processing method

Info

Publication number
WO2021044953A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
robot device
information processing
processing system
unit
Prior art date
Application number
PCT/JP2020/032533
Other languages
French (fr)
Japanese (ja)
Inventor
嵩明 加藤
清和 宮澤
康史 林田
Original Assignee
ソニー株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニー株式会社
Publication of WO2021044953A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 13/00 - Controls for manipulators
    • B25J 13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/06 - Safety devices
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 - Control of position or course in two dimensions
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02 - Alarms for ensuring the safety of persons
    • G08B 25/00 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B 25/01 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems, characterised by the transmission medium
    • G08B 25/04 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems, characterised by the transmission medium using a single signalling line, e.g. in a closed loop

Definitions

  • This disclosure relates to an information processing system and an information processing method.
  • A system that manages the safety of a monitoring target, such as a child, based on information detected by sensors is known. For example, such a system alerts the child to a danger to guide the child into a safe state, or makes a caregiver aware by an alarm sound that the child is in a dangerous state (Patent Document 1).
  • In such a system, a plurality of distance image sensors provided on the ceiling of the space are used to predict the behavior of the child and determine whether the child will be in a dangerous state after a predetermined time.
  • An information processing system according to one form of the present disclosure includes a first robot that has a moving means and an operating means for operating an object, and a second robot that has a moving means and tracks a monitoring target. The second robot transmits an image of the monitoring target captured by an image sensor to the first robot, and the first robot determines a danger caused by the behavior of the monitoring target based on the image of the monitoring target received from the second robot and, based on the determination result, executes a process for avoiding the occurrence of the danger.
  • 1. Embodiment
    1-1. Outline of information processing according to the embodiment of the present disclosure
    1-1-1. About danger
    1-1-2. Danger judgment
    1-1-3. Advantages of combining large and small robots
    1-2. Configuration of Information Processing System According to Embodiment
    1-3. Configuration of the first robot device according to the embodiment
    1-4. Configuration of the Second Robot Device According to the Embodiment
    1-5. Information processing procedure according to the embodiment
    1-6. Conceptual diagram of the configuration of the information processing system
    1-7. Processing example of information processing system
    1-7-1. Example of search processing
    1-7-2. Other examples of search processing
    1-7-3. Example of accidental ingestion suppression treatment
    1-7-4. Example of evacuation guidance processing
    1-7-5. Example of plan update process
    1-7-6. Example of rescue processing
    1-8. Recognition example of monitoring target
    1-9. Map update example
    1-9-1. Classification of objects
    1-9-2. Update of risk map
    2. Other Embodiments
    2-1. Other configuration examples
    2-2. Others
    3. Effect of this disclosure
    4. Hardware configuration
  • FIG. 1 is a diagram showing an example of information processing according to the embodiment of the present disclosure.
  • The information processing according to the embodiment of the present disclosure is realized by the information processing system 1 (see FIG. 2) including the first robot device 100, which is the first robot, and the second robot device 200, which is the second robot, shown in FIG. 1.
  • the first robot device 100 and the second robot device 200 included in the information processing system 1 execute information processing according to the embodiment.
  • The first robot device 100 has a moving means (moving unit 15) and an operating means (operating unit 16) for operating an object, and is a large robot capable of operating the monitoring target or other objects.
  • The second robot device 200 has a moving means (moving unit 25), is smaller than the first robot device 100, and is a small robot that can move into places (spaces) narrower than those the first robot device 100 can reach.
  • In the example of FIG. 1, a baby located in a space SP, which is an indoor living environment such as the living room of a house, is set as the monitoring target TG, a danger caused by the behavior of the monitoring target TG is determined, and a process for avoiding the occurrence of the danger is executed based on the determination result.
  • The case where the monitoring target is a baby is shown as an example, but the monitoring target is not limited to a baby and may be any of various targets, such as a child larger than a baby or a pet.
  • The monitoring target may be any subject that moves (acts) autonomously, whose behavior is difficult to predict, and whose behavior is difficult to restrain by language.
  • a plurality of objects are located in the space SP where the monitored TG is located.
  • a plurality of objects OB1 to OB7 including an object OB1 which is a sink, an object OB2 which is a table, and an object OB3 which is a sofa are located in the space SP.
  • The objects OB1 to OB7 are labeled for the sake of explanation, but in addition to the objects OB1 to OB7, many other objects, such as the television at the center of the right end, are located in the space SP and are recognized by the information processing system 1.
  • In the space SP, the first robot device 100, which is a large robot, and the second robot device 200, which is a small robot that tracks the monitoring target TG, are located.
  • the first robot device 100 and the second robot device 200 cooperate with each other to monitor the monitored TG and determine the danger caused by the behavior of the monitored TG.
  • The second robot device 200 performs detection with the image sensor 241 (step S11).
  • the second robot device 200 tracks the monitored TG and images the monitored TG.
  • the second robot device 200 captures the image IM1 by the image sensor 241.
  • the second robot device 200 recognizes a person included in the captured image IM1 and estimates the position of the face.
  • the second robot device 200 recognizes a person included in the image IM1 detected by the image sensor 241 by appropriately using various conventional techniques related to human recognition.
  • the second robot device 200 recognizes various objects such as a person included in the image IM1 detected by the image sensor 241 by appropriately using various techniques related to object recognition such as general object recognition.
  • the second robot device 200 recognizes a person (baby) included in the image IM1 as a monitoring target TG. If the second robot device 200 does not recognize a person, the first robot device 100 may do so.
  • the second robot device 200 estimates the position of the human face included in the image IM1 by appropriately using various conventional techniques related to face recognition.
  • the second robot device 200 estimates the position of the face FC of the monitored target TG, which is a person (baby) included in the image IM1. If the second robot device 200 does not estimate the position of the face, the first robot device 100 may do so.
  • The second robot device 200 moves, by means of the moving unit 25, to a position from which the estimated face of the monitoring target TG can be imaged, tracks the monitoring target TG from a position where its face can be imaged, and images the monitoring target TG.
  • the second robot device 200 transmits an image to the first robot device 100 (step S12).
  • the second robot device 200 transmits an image including the captured monitored TG to the first robot device 100.
  • the second robot device 200 transmits an image including the face FC of the monitored TG imaged to the first robot device 100.
  • the second robot device 200 transmits not only an image including the monitored TG but also various information such as an image obtained by capturing an object to the first robot device 100.
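  • The following is a minimal sketch, not part of the disclosure, of the kind of payload the second robot device 200 could send to the first robot device 100 in step S12; the field names and the transport channel are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple
import time

@dataclass
class MonitoringMessage:
    """Illustrative payload from the small robot (second robot device 200)
    to the large robot (first robot device 100); field names are assumptions."""
    image: bytes                                    # frame captured by the image sensor 241
    face_box: Optional[Tuple[int, int, int, int]]   # estimated face position (x, y, w, h), if any
    target_recognized: bool                         # whether the monitoring target was recognized
    robot_pose: Tuple[float, float, float]          # (x, y, theta) of the small robot, if known
    timestamp: float = field(default_factory=time.time)

def send_to_first_robot(message: MonitoringMessage, channel) -> None:
    """'channel' stands in for whatever wireless link (e.g. Wi-Fi or Bluetooth,
    see the system configuration) connects the two robot devices."""
    channel.send(message)
```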
  • the first robot device 100 updates the risk map (step S13).
  • the first robot device 100 updates the risk map by appropriately using various information such as an image acquired from the second robot device 200 and an image captured by the image sensor 141.
  • the first robot device 100 updates the risk map MP1.
  • the first robot device 100 generates the risk map MP1.
  • the first robot device 100 generates a risk map MP1 including the recognized objects OB1 to OB7.
  • The first robot device 100 generates a risk map MP1 including the position of the first robot device 100 estimated by the self-position estimation technique, the position of the second robot device 200, and the position of the monitoring target TG.
  • the first robot device 100 has a SLAM (Simultaneous Localization and Mapping) function, and uses SLAM technology to estimate its own position and generate an environmental map such as a risk map MP1.
  • the first robot device 100 may generate the risk map MP1 by appropriately using various information, not limited to the image.
  • The first robot device 100 may have the second robot device 200 capture images while moving the second robot device 200 around the space SP, and may generate the risk map MP1 using the images captured by the second robot device 200.
  • The first robot device 100 may capture images with the image sensor 141 while moving in the space SP and generate the risk map MP1 using the captured images. Further, the first robot device 100 may generate the risk map MP1 using a point cloud obtained by a distance measuring sensor or the like. Further, the first robot device 100 may recognize objects based on the object information stored in the object information storage unit 122 (see FIG. 3) and on object information input by the administrator of the information processing system 1.
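  • As a purely illustrative aid (not the disclosed implementation), the risk map MP1 described above could be represented roughly as follows, combining recognized objects with the estimated positions of the two robot devices and the monitoring target; all names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MappedObject:
    name: str                       # e.g. "sofa" for the object OB3
    position: Tuple[float, float]   # (x, y) in map coordinates
    footprint: float                # approximate radius occupied by the object, in meters
    is_danger_area: bool            # whether the surrounding area is treated as a danger area

@dataclass
class RiskMap:
    objects: List[MappedObject] = field(default_factory=list)
    first_robot_pose: Tuple[float, float] = (0.0, 0.0)   # estimated via SLAM
    second_robot_pose: Tuple[float, float] = (0.0, 0.0)
    target_position: Tuple[float, float] = (0.0, 0.0)    # monitoring target TG

    def update_from_observation(self, obj: MappedObject) -> None:
        """Add or refresh a recognized object; a real system would fuse repeated
        observations (e.g. from SLAM or a distance measuring sensor) instead of
        simply replacing the entry."""
        self.objects = [o for o in self.objects if o.name != obj.name]
        self.objects.append(obj)
```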
  • the first robot device 100 may determine whether each object is a dangerous object for the monitored TG based on the object information.
  • the first robot device 100 may generate a danger determination condition described later based on the risk map MP1.
  • the first robot device 100 may use the danger determination conditions input by the administrator of the information processing system 1.
  • the first robot device 100 makes a danger determination (step S14).
  • the first robot device 100 determines the danger caused by the behavior of the monitored TG based on the image IM1 of the monitored TG received from the second robot device 200 and the risk map MP1.
  • the first robot device 100 determines various dangers caused by the behavior of the monitored TG, and details of this point will be described later.
  • the first robot device 100 executes a process for avoiding the occurrence of danger based on the determination result (step S15).
  • When the first robot device 100 determines that there is a danger, it executes a process for avoiding the occurrence of the danger.
  • For example, when a certain danger determination condition is satisfied, the first robot device 100 executes the process for avoiding the occurrence of the corresponding danger.
  • When the first robot device 100 determines that there is a danger, it either executes the process for avoiding the occurrence of the danger itself or causes the second robot device 200 to take an action for avoiding the occurrence of the danger.
  • the first robot device 100 executes processes for avoiding various dangers according to the content of the dangers, and details of this point will be described later.
  • the first robot device 100 and the second robot device 200 cooperate with each other to monitor the monitored TG and suppress the occurrence of danger caused by the behavior of the monitored TG.
  • the information processing system 1 can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • The danger caused by the behavior of the monitoring target is a concept that includes both danger that befalls the monitoring target itself as a result of its behavior and danger that befalls something other than the monitoring target. That is, the danger caused by the behavior of the monitoring target includes both the danger that extends to the monitoring target itself and the danger that extends to things other than the monitoring target.
  • The dangers that extend to the monitoring target include, for example, the monitoring target hitting an object, falling, being burned, accidentally swallowing an object, or being pinched by an object, and other such dangers that befall the monitoring target itself. That is, the danger that extends to the monitoring target corresponds to the risk that the monitoring target itself is harmed.
  • The dangers that extend to things other than the monitoring target include, for example, damage to objects and fires caused by the behavior of the monitoring target, and other such dangers that befall things other than the monitoring target. In other words, the danger that extends to something other than the monitoring target corresponds to the risk that something other than the monitoring target is harmed.
  • The information processing system 1 detects a preceding behavior (precursor action) of the monitoring target that leads to a danger of harming the monitoring target itself or something other than the monitoring target, and when the monitoring target performs a precursor action, determines that there is the danger corresponding to that precursor action.
  • The information processing system 1 may determine that there is the danger associated with a precursor action when the monitoring target performs the precursor action, by using information in which each danger is associated with a precursor action.
  • When the information processing system 1 determines that there is a danger associated with a precursor action, it predicts that the danger may occur and performs processing for avoiding the occurrence of the danger (avoidance processing) or the like.
  • A precursor action may be any of various behaviors that lead to the occurrence of a danger, such as being located in a predetermined area, touching an object, grasping an object, or staring at an object.
  • the information processing system 1 determines the danger caused by the behavior of the monitored target by using the sensor information such as the risk map, the position of the monitored target, and the image obtained by capturing the monitored target. For example, the information processing system 1 predicts whether or not each danger may occur by using the information in which the information for identifying each danger is associated with the condition for determining the danger (danger determination condition).
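  • A minimal sketch of the association just described, with assumed names: each danger is paired with a danger determination condition, expressed here as a predicate over the latest observation of the monitoring target.

```python
# Illustrative danger determination conditions; a real system would derive
# these from the risk map and the object information, not hard-code them.
DANGER_CONDITIONS = {
    "fall": lambda obs: obs["in_danger_area"],
    "burn": lambda obs: obs["touching_outlet"],
    "accidental_ingestion": lambda obs: obs["grasps_ingestion_danger_object"],
}

def predict_dangers(observation: dict) -> list:
    """Return the identifiers of every danger whose condition is satisfied."""
    return [name for name, condition in DANGER_CONDITIONS.items() if condition(observation)]

# Example: the monitoring target is inside a danger area only.
observation = {"in_danger_area": True, "touching_outlet": False,
               "grasps_ingestion_danger_object": False}
print(predict_dangers(observation))  # -> ['fall']
```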
  • each of the processes shown below may be performed by any device included in the information processing system 1, such as the first robot device 100 and the second robot device 200.
  • For example, the first robot device 100 may mainly perform the danger determination, and the second robot device 200 may perform a part of the danger determination.
  • For example, the second robot device 200 may make a provisional (simple) determination of the risk of accidental ingestion by the monitoring target, and the first robot device 100 may make a definitive (full-scale) determination of the risk of accidental ingestion by the monitoring target using the images and other information acquired from the second robot device 200.
  • The information processing system 1 may determine the danger caused by the behavior of the monitoring target on the condition that the monitoring target is located in a dangerous area (danger area). When the position of the monitoring target is within the danger area, or within a predetermined range from the danger area (for example, within 30 cm of the danger area), the information processing system 1 may determine that the danger determination condition is satisfied and that a danger may occur.
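  • A minimal geometric sketch of the danger area condition above, assuming a danger area modeled as a circle in map coordinates; the 30 cm margin matches the example of a predetermined range.

```python
import math

def near_danger_area(target_xy, area_center_xy, area_radius, margin=0.3):
    """True if the monitoring target is inside the danger area or within
    'margin' meters (0.3 m = 30 cm) of its boundary."""
    return math.dist(target_xy, area_center_xy) <= area_radius + margin

# Example: a danger area of radius 1.0 m around (2.0, 1.5); a target at
# (2.9, 1.5) is within 30 cm of the boundary, so the condition is satisfied.
print(near_danger_area((2.9, 1.5), (2.0, 1.5), 1.0))  # True
```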
  • the information processing system 1 may determine the danger caused by the behavior of the monitored TG, using the area where the object OB2 and the object OB3 are arranged as a danger area.
  • The information processing system 1 determines the risk of the monitoring target TG falling, using as danger areas the areas where the object OB2 and the object OB3 are arranged, since the monitoring target TG could fall after climbing onto them. For example, when the monitoring target TG is located at the position of the object OB3 or within a predetermined range from the position of the object OB3, the information processing system 1 determines that there is a risk of the monitoring target TG falling.
  • When the information processing system 1 determines that there is a risk of the monitoring target TG falling, it executes a process for avoiding the risk of falling. For example, when the first robot device 100 determines that there is a danger of the monitoring target TG falling, it executes a process of evacuating the monitoring target TG from the danger area with the operation unit 16. The first robot device 100 grips the monitoring target TG with the operation unit 16 and executes a process of carrying the monitoring target TG to a position away from the danger area. Further, the first robot device 100 may instruct the second robot device 200 to perform an action that attracts the attention of the monitoring target TG, thereby moving the monitoring target TG to a position away from the danger area.
  • the information processing system 1 may determine the danger caused by the behavior of the monitored TG, using the area where the object OB7 is arranged as a danger area. For example, it is assumed that the object OB7 is an expensive article, and the guardian or the like of the monitored TG has designated the avoidance of damage.
  • The information processing system 1 determines the danger that extends to something other than the monitoring target TG, using the area where the object OB7 is arranged as a danger area. For example, when the monitoring target TG is located at the position of the object OB7 or within a predetermined range from the position of the object OB7, the information processing system 1 determines that there is a risk of the object OB7 being damaged by the monitoring target TG.
  • the information processing system 1 determines that there is a risk of damage to the object OB7, the information processing system 1 executes a process for avoiding the risk of damage to the object OB7.
  • the operation unit 16 executes a process of evacuating the monitored TG from the danger area.
  • the first robot device 100 grips the monitored TG by the operation unit 16 and executes a process of carrying the monitored TG to a position away from the danger area (the position of the object OB7).
  • The first robot device 100 may instruct the second robot device 200 to perform an action that attracts the attention of the monitoring target TG, thereby moving the monitoring target TG to a position away from the danger area (the position of the object OB7).
  • the information processing system 1 may determine the danger caused by the behavior of the monitored object on the condition that the contact with the object is a danger determination condition.
  • For example, the information processing system 1 may determine that the danger determination condition is satisfied and that a danger may occur when the monitoring target holds an object, or when the monitoring target remains in contact with an object for a predetermined time (for example, 10 seconds) or more.
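  • A minimal sketch of the time-based part of this condition (continuous contact for a predetermined time such as 10 seconds); the timing logic is an assumption made for illustration.

```python
import time
from typing import Optional

class ContactTimer:
    """Tracks how long the monitoring target has stayed in contact with an
    object and reports when the predetermined time has been exceeded."""

    def __init__(self, threshold_seconds: float = 10.0):
        self.threshold = threshold_seconds
        self.contact_start: Optional[float] = None

    def update(self, in_contact: bool, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        if not in_contact:
            self.contact_start = None     # contact broken: reset the timer
            return False
        if self.contact_start is None:
            self.contact_start = now      # contact just started
        return (now - self.contact_start) >= self.threshold

timer = ContactTimer()
print(timer.update(True, now=0.0))   # False: contact just started
print(timer.update(True, now=12.0))  # True: contact has lasted 10 s or more
```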
  • the information processing system 1 may determine the danger caused by the behavior of the monitored TG according to the contact of the monitored TG with the outlet (not shown) in the space SP.
  • the information processing system 1 determines the risk of burns on the monitored TG on condition that the monitored TG comes into contact with the outlet, which may cause burns. For example, the information processing system 1 determines that there is a risk of burns to the monitored TG when the monitored TG comes into contact with the outlet.
  • The information processing system 1 may determine that there is a risk of burns to the monitoring target TG when the monitoring target TG grips the plug connected to the outlet. Further, when the monitoring target TG is located within a predetermined range from the position of the outlet, the information processing system 1 may determine that there is a risk of burns to the monitoring target TG.
  • When the information processing system 1 determines that there is a risk of burns to the monitoring target TG, it executes a process for avoiding the risk of burns. For example, when the first robot device 100 determines that there is a risk of burns to the monitoring target TG, it grips, with the operation unit 16, the contacting part such as the hand of the monitoring target TG and executes a process of releasing the monitoring target TG's contact with the object such as the outlet.
  • The first robot device 100 may perform a process of grasping the monitoring target TG with the operation unit 16 and carrying the monitoring target TG to a position away from the object such as the outlet. Further, the first robot device 100 may instruct the second robot device 200 to perform an action that attracts the attention of the monitoring target TG, thereby moving the monitoring target TG to a position away from the object such as the outlet.
  • The information processing system 1 may determine the risk of accidental ingestion by the monitoring target on the condition that the monitoring target grasps an object and that the object poses a risk of accidental ingestion (an accidental ingestion danger object). In this case, the information processing system 1 uses information indicating whether or not each object is an accidental ingestion danger object (accidental ingestion danger object information) to determine whether or not the object grasped by the monitoring target TG is an accidental ingestion danger object. For example, the information processing system 1 uses the accidental ingestion danger object information stored in the object information storage unit 122 (see FIG. 3) to determine whether or not the object held by the monitoring target TG is an accidental ingestion danger object.
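  • A minimal sketch of this lookup, with assumed object identifiers standing in for the accidental ingestion danger object information held in the object information storage unit 122.

```python
from typing import Optional

# Illustrative stand-in for the accidental ingestion danger object information.
ACCIDENTAL_INGESTION_DANGER_OBJECTS = {"cigarette", "button_battery", "coin"}

def accidental_ingestion_risk(grasped_object_id: Optional[str]) -> bool:
    """Condition: the monitoring target grasps an object AND that object is
    registered as an accidental ingestion danger object."""
    return (grasped_object_id is not None
            and grasped_object_id in ACCIDENTAL_INGESTION_DANGER_OBJECTS)

print(accidental_ingestion_risk("cigarette"))  # True
print(accidental_ingestion_risk("plush_toy"))  # False
```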
  • the information processing system 1 may determine the danger caused by the behavior of the monitored TG according to the gripping of the monitored TG of the cigarette (not shown) in the space SP.
  • the information processing system 1 determines the risk of accidental ingestion of the monitored TG on condition that the monitored TG may accidentally swallow a cigarette. For example, the information processing system 1 determines that there is a risk of accidental ingestion of the monitored TG when the monitored TG holds a cigarette.
  • the information processing system 1 may determine that there is a risk of accidental ingestion of the monitored TG when the monitored TG visually recognizes (stares) the cigarette for a certain period of time or longer. Further, the information processing system 1 may determine that there is a risk of accidental ingestion of the monitored TG when it is located within a predetermined range from the position of the cigarette.
  • When the information processing system 1 determines that there is a risk of accidental ingestion by the monitoring target TG, it executes a process for avoiding the risk of accidental ingestion. For example, when the first robot device 100 determines that there is a risk of accidental ingestion by the monitoring target TG, it grips the hand of the monitoring target TG with the operation unit 16 and executes a process of making the monitoring target TG release its grasp on the accidental ingestion danger object such as a cigarette. The first robot device 100 may also grip the accidental ingestion danger object such as a cigarette with the operation unit 16 and take it away from the monitoring target TG, that is, execute a process of releasing the monitoring target TG's grasp on the accidental ingestion danger object.
  • As described above, the information processing system 1 performs processing by the combination of a large robot (the first robot device 100) that grasps the entire environment and a small robot (the second robot device 200) that tracks the monitoring target and sends information focusing on the face and hands of the monitoring target.
  • the small robot can search for a monitoring target and inform the large robot of the position of the child.
  • the information processing system 1 can constantly monitor the hands of a child by a small robot.
  • Since the small robot is small, its presence is unobtrusive even when it is nearby, which reduces the possibility that the monitoring target pays attention to it. Further, in the information processing system 1, the small robot can move to and monitor places where the large robot cannot move. In addition, in the information processing system 1, the large robot can move the child, can perform large-scale calculations, and can take a bird's-eye view of the entire room.
  • The information processing system 1 can grasp the child's mouth and the entire environment at the same time. For example, the information processing system 1 can simultaneously grasp an image showing the child's mouth and an image showing a vase on a desk. Further, even when it is necessary to get around to the child under a chair, the information processing system 1 can do so with the small robot (second robot device 200).
  • the information processing system 1 can lower a monitored object that has climbed a step or a chair by a large robot (first robot device 100) having an operation unit 16.
  • In the information processing system 1, the large robot (first robot device 100) and the small robot (second robot device 200) can properly search for the child.
  • The information processing system 1 can suppress the occurrence of danger caused by the behavior of the monitoring target by using moving robots such as the first robot device 100 and the second robot device 200, which saves space and does not hinder the behavior of the parent. Further, in the information processing system 1, the robots move autonomously to create the risk map without installing sensors in structures of the space SP such as the ceiling, walls, and floor, and at the same time the position of the monitoring target such as a baby can be grasped.
  • the information processing system 1 can suppress the occurrence of danger related to dangerous materials that cannot be dealt with by the baby guard.
  • the information processing system 1 can suppress the occurrence of danger related to dangerous objects such as moving objects and small gaps.
  • The information processing system 1 can update the risk map in real time and protect the monitoring target such as a child from danger by using only the sensors possessed by the robots, such as the first robot device 100 and the second robot device 200.
  • the information processing system 1 enables a person (adult, etc.) who monitors the monitored object to go out with peace of mind, leaving the monitored object (child, etc.).
  • FIG. 2 is a diagram showing a configuration example of an information processing system according to an embodiment.
  • the information processing system 1 includes a first robot device 100 and a second robot device 200.
  • the second robot device 200 and the first robot device 100 are communicably connected by wire or wirelessly via the network N.
  • the information processing system 1 shown in FIG. 2 may include a plurality of second robot devices 200 and a plurality of first robot devices 100.
  • The first robot device 100 and the second robot device 200 may communicate with each other by a wireless communication function such as Wi-Fi (Wireless Fidelity, registered trademark) or Bluetooth (registered trademark).
  • The first robot device 100 is a robot that has a moving means and an operating means for operating an object.
  • the first robot device 100 is an information processing device that performs various types of information processing.
  • The first robot device 100 communicates with the second robot device 200 via the network N and gives instructions for controlling the second robot device 200 based on the information collected by the second robot device 200 and by its various sensors.
  • the first robot device 100 is a large robot (large robot) capable of operating a monitored object or an object.
  • the first robot device 100 has a self-position estimation function.
  • the first robot device 100 has an object recognition function.
  • the first robot device 100 has a function of recognizing an object (object) such as a vase.
  • the first robot device 100 has a risk map creation / update function.
  • the first robot device 100 has a position mapping function of the second robot device 200, which is a small robot.
  • The first robot device 100 estimates the position of the second robot device 200 from the images sent from the second robot device 200 and from the images taken by the first robot device 100. For example, the first robot device 100 estimates the position of the second robot device 200 based on the image acquired from the second robot device 200 and on the second robot device 200 appearing in the image taken by the image sensor 141.
  • the first robot device 100 has a function of creating a map by inputting an image acquired from the second robot device 200 into the first robot device 100.
  • the first robot device 100 has a function of evacuating the child from the prohibited area.
  • the first robot device 100 has a function of retracting a child from a prohibited area by an operation unit 16 such as an arm.
  • the first robot device 100 may have an output unit that outputs voice when the sound attracts the attention of the monitored object.
  • the first robot device 100 has a face recognition function, a person recognition function, and an object recognition function.
  • The second robot device 200 is a robot that has a moving means and tracks the monitoring target.
  • the second robot device 200 is a small robot (small robot) whose size is smaller than that of the first robot device 100, which is a large robot.
  • the second robot device 200 is an information processing device that performs various types of information processing.
  • the second robot device 200 communicates with the first robot device 100 via the network N, and transmits information to the first robot device 100.
  • the second robot device 200 has a face recognition function, a person recognition function, and an object recognition function.
  • the second robot device 200 has a function of recognizing an object at hand to be monitored.
  • the second robot device 200 detects an image.
  • the second robot device 200 detects an image for position mapping and map creation.
  • the second robot device 200 transmits an image to the first robot device 100.
  • the second robot device 200 tracks a monitoring target such as a child.
  • the second robot device 200 does not have to store the risk map in the storage unit 22.
  • the second robot device 200 wraps around the monitored object so as to image the front of the monitored object such as a child.
  • the second robot device 200 has an alert function and a movement function.
  • FIG. 3 is a diagram showing a configuration example of the first robot device according to the embodiment.
  • the first robot device 100 includes a communication unit 11, a storage unit 12, a control unit 13, a sensor unit 14, a moving unit 15, and an operating unit 16.
  • the first robot device 100 has a moving unit 15 which is a moving means.
  • the first robot device 100 has an operation unit 16 which is an operation means for operating an object.
  • the communication unit 11 is realized by, for example, a NIC (Network Interface Card), a communication circuit, or the like.
  • the communication unit 11 is connected to the network N (Internet, etc.) by wire or wirelessly, and transmits / receives information to / from other devices via the network N.
  • the storage unit 12 is realized by, for example, a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory (Flash Memory), or a storage device such as a hard disk or an optical disk.
  • the storage unit 12 includes a map information storage unit 121, an object information storage unit 122, and a danger determination information storage unit 123.
  • the map information storage unit 121 stores various information related to the map (map). For example, the map information storage unit 121 stores the risk level map. The map information storage unit 121 stores a risk map based on the information detected by the second robot device 200. For example, the map information storage unit 121 stores a risk map including the position of the monitoring target, the position of the first robot device 100, and the position of the second robot device 200. For example, the map information storage unit 121 stores a monitoring target, a risk map that maps the first robot device 100 and the second robot device 200. For example, the map information storage unit 121 stores a three-dimensional risk map.
  • the map information storage unit 121 stores information such as the risk map MP1.
  • the map information storage unit 121 stores the monitoring target TG, the first robot device 100, and the risk map MP1 that maps the second robot device 200.
  • the map information storage unit 121 may store a two-dimensional risk map.
  • the map information storage unit 121 may store the occupied grid map.
  • the object information storage unit 122 stores various information related to the object (object).
  • the object information storage unit 122 stores the information of the object in association with the information that identifies each object.
  • the object information storage unit 122 stores information related to the attributes of the object.
  • the object information storage unit 122 stores the size of the object.
  • the object information storage unit 122 stores information indicating the position of the object (object).
  • the object information storage unit 122 stores information indicating an area occupied by the object.
  • the object information storage unit 122 stores information indicating whether the position of the object can be changed.
  • the object information storage unit 122 stores information indicating the danger of an object (object).
  • the object information storage unit 122 stores information indicating whether it is dangerous for the monitored object to come into contact with the object. For example, the object information storage unit 122 stores an object that is dangerous to be touched by the monitored object as a contact danger object.
  • the object information storage unit 122 stores information indicating whether it is dangerous for the monitored object to grasp the object. For example, the object information storage unit 122 stores an object that is dangerous to be grasped by the monitored object as a gripping danger object.
  • the object information storage unit 122 stores information indicating whether the monitored object has a risk of accidentally swallowing the object.
  • the object information storage unit 122 stores information indicating whether it is dangerous for the monitored object to put the object in the mouth.
  • the object information storage unit 122 stores an object that may be accidentally swallowed by the monitored object as a risk of accidental ingestion object.
  • the object information storage unit 122 stores information indicating the value of an object (object).
  • the object information storage unit 122 stores information indicating whether the object is expensive.
  • the object information storage unit 122 stores information indicating whether it is dangerous for the monitored object to approach the object.
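  • For illustration only, the attributes listed above could be gathered into a single object information record along the following lines; the field names are assumptions, not the disclosed schema of the object information storage unit 122.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectInfo:
    object_id: str
    position: Tuple[float, float]  # (x, y) in map coordinates
    size: float                    # rough size of the object, in meters
    movable: bool                  # whether the object's position can be changed
    contact_danger: bool           # dangerous for the monitoring target to touch
    grip_danger: bool              # dangerous for the monitoring target to grasp
    ingestion_danger: bool         # risk of accidental swallowing
    expensive: bool                # approach should be avoided because of its value

# Example entry corresponding to the cigarette in the accidental ingestion example.
cigarette = ObjectInfo("cigarette", (1.2, 0.4), 0.08, True, False, True, True, False)
```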
  • the danger determination information storage unit 123 stores various information related to the danger determination.
  • the danger determination information storage unit 123 stores the conditions for determining the danger.
  • the danger determination information storage unit 123 stores the information for identifying each danger in association with the condition for determining the danger (danger determination condition).
  • The danger determination information storage unit 123 stores conditions for determining whether or not the monitoring target is located in a dangerous area (danger area).
  • the danger determination information storage unit 123 stores information indicating the danger area as a danger determination condition for intrusion of the danger area.
  • the danger determination information storage unit 123 stores that the monitoring target is located in the danger area as a danger determination condition for intrusion into the danger area.
  • the danger determination information storage unit 123 stores that the monitoring target is located within a predetermined range (for example, 50 cm from the danger area) from the danger area as a danger determination condition for intrusion of the danger area.
  • the danger determination information storage unit 123 stores the conditions for determining whether or not the monitored object comes into contact with an object whose contact is dangerous.
  • the danger determination information storage unit 123 stores that the monitored object comes into contact with an object whose contact is dangerous as a contact danger determination condition.
  • the danger determination information storage unit 123 stores that the monitored object comes into contact with an object and that the object is a contact danger object as a contact danger determination condition.
  • the danger determination information storage unit 123 stores the conditions for determining whether or not the monitored object grips an object whose grip is dangerous.
  • the danger determination information storage unit 123 stores that the monitored object grips an object whose grip is dangerous as a gripping danger determination condition.
  • the danger determination information storage unit 123 stores that the monitored object is gripped by an object and that the object is a gripping danger object as a gripping danger determination condition.
  • the danger determination information storage unit 123 stores conditions for determining whether or not the monitored object accidentally swallows an object.
  • the danger determination information storage unit 123 stores as a risk determination condition for accidental ingestion that the monitored object holds an object in the hand and the object has a risk of accidental ingestion.
  • the danger determination information storage unit 123 stores that the monitored object holds an object in the hand and that the object is an accidental ingestion risk object as a risk determination condition for accidental ingestion.
  • the storage unit 12 is not limited to the map information storage unit 121, the object information storage unit 122, and the danger determination information storage unit 123, and various types of information are stored.
  • the storage unit 12 may store various information related to the operation unit 16.
  • the storage unit 12 may store information indicating the number of operation units 16 and the installation position of the operation unit 16.
  • the storage unit 12 may store various types of information used for identifying (estimating) an object.
  • The control unit 13 is realized, for example, by a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or the like executing a program stored in the first robot device 100 (for example, an information processing program according to the present disclosure) using a RAM (Random Access Memory) or the like as a work area.
  • the control unit 13 may be realized by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
  • The control unit 13 includes an acquisition unit 131, a recognition unit 132, a generation unit 133, an estimation unit 134, a determination unit 135, a planning unit 136, and an execution unit 137, and realizes or executes the information processing functions and actions described below.
  • the internal configuration of the control unit 13 is not limited to the configuration shown in FIG. 3, and may be another configuration as long as it is a configuration for performing information processing described later.
  • the acquisition unit 131 acquires various information.
  • the acquisition unit 131 acquires various information from an external information processing device.
  • the acquisition unit 131 receives various information from an external information processing device.
  • the acquisition unit 131 receives various information from the second robot device 200.
  • the acquisition unit 131 acquires various information from the storage unit 12.
  • the acquisition unit 131 acquires various information from the map information storage unit 121, the object information storage unit 122, and the danger determination information storage unit 123.
  • the acquisition unit 131 acquires information from the recognition unit 132, the generation unit 133, the estimation unit 134, the determination unit 135, and the planning unit 136.
  • the acquisition unit 131 stores the acquired information in the storage unit 12.
  • the acquisition unit 131 acquires the sensor information detected by the sensor unit 14.
  • the acquisition unit 131 acquires the sensor information (image information) detected by the image sensor 141.
  • the acquisition unit 131 acquires the image information (image) captured by the image sensor 141.
  • the acquisition unit 131 receives the image to be monitored from the second robot device 200.
  • the acquisition unit 131 receives information indicating that the alert has been issued from the second robot device 200.
  • the recognition unit 132 recognizes various types of information.
  • the recognition unit 132 analyzes various information.
  • the recognition unit 132 analyzes the image information.
  • the recognition unit 132 analyzes various information from the image information based on the information from the external information processing device and the information stored in the storage unit 12.
  • the recognition unit 132 identifies various types of information from the image information.
  • the recognition unit 132 extracts various information from the image information.
  • the recognition unit 132 performs recognition based on the analysis result.
  • the recognition unit 132 recognizes various information based on the analysis result.
  • the recognition unit 132 performs analysis processing related to the image.
  • the recognition unit 132 performs various processes related to image processing.
  • the recognition unit 132 processes the image information (image) acquired by the acquisition unit 131.
  • the recognition unit 132 processes the image information (image) captured by the second robot device 200.
  • the recognition unit 132 processes the image information (image) captured by the image sensor 141.
  • the recognition unit 132 performs processing on the image by appropriately using a technique related to image processing.
  • the recognition unit 132 recognizes the object.
  • the recognition unit 132 recognizes each object included in the image detected by the image sensor 141 and the image acquired from the second robot device 200 by appropriately using various techniques related to object recognition such as general object recognition.
  • the recognition unit 132 recognizes a person in the image.
  • the recognition unit 132 recognizes a person's face in the image.
  • the recognition unit 132 recognizes the face to be monitored by the face recognition function.
  • the recognition unit 132 recognizes the monitored target as a person by the person recognition function.
  • Generation unit 133 generates various information.
  • the generation unit 133 generates various information based on the information from the external information processing device and the information stored in the storage unit 12.
  • the generation unit 133 generates various information based on the information from the second robot device 200.
  • the generation unit 133 generates various information based on the information stored in the storage unit 12.
  • the generation unit 133 generates various information based on the various information acquired by the acquisition unit 131.
  • the generation unit 133 generates various information based on the various information recognized by the recognition unit 132.
  • the generation unit 133 generates various information based on the various information estimated by the estimation unit 134.
  • the generation unit 133 generates various information based on various information determined by the determination unit 135.
  • Generation unit 133 generates various classification information.
  • the generation unit 133 performs various classifications.
  • the generation unit 133 classifies various types of information.
  • the generation unit 133 performs the classification process based on the information acquired by the acquisition unit 131.
  • the generation unit 133 classifies the information acquired by the acquisition unit 131.
  • the generation unit 133 performs the classification process based on the information stored in the storage unit 12.
  • the generation unit 133 performs various classifications based on the information acquired by the acquisition unit 131.
  • the generation unit 133 performs various classifications using various sensor information detected by the sensor unit 14.
  • the generation unit 133 performs various classifications using the sensor information detected by the image sensor 141.
  • Generation unit 133 generates a risk map related to danger.
  • the generation unit 133 maps the position of the second robot device 200.
  • the estimation unit 134 estimates various information.
  • the estimation unit 134 estimates various types of information based on the information acquired from the external information processing device.
  • the estimation unit 134 estimates various types of information based on the information stored in the storage unit 12.
  • the estimation unit 134 estimates various information based on the result of the recognition process by the recognition unit 132.
  • the estimation unit 134 predicts various types of information.
  • the estimation unit 134 predicts various types of information based on the information acquired from the external information processing device.
  • the estimation unit 134 predicts various types of information based on the information stored in the storage unit 12.
  • the estimation unit 134 predicts various information based on the result of the recognition process by the recognition unit 132.
  • the estimation unit 134 performs various estimations based on the information acquired by the acquisition unit 131.
  • the estimation unit 134 performs various estimations using various sensor information detected by the sensor unit 14.
  • the estimation unit 134 performs various estimations using the sensor information detected by the image sensor 141.
  • the estimation unit 134 performs various estimations using the sensor information detected by the second robot device 200.
  • the estimation unit 134 makes various predictions based on the information acquired by the acquisition unit 131.
  • the estimation unit 134 makes various predictions using various sensor information detected by the sensor unit 14.
  • the estimation unit 134 makes various predictions using the sensor information detected by the image sensor 141.
  • the estimation unit 134 makes various predictions using the sensor information detected by the second robot device 200.
  • the estimation unit 134 performs estimation processing based on the image information acquired by the acquisition unit 131.
  • the estimation unit 134 performs estimation processing based on the image information received from the second robot device 200.
  • the estimation unit 134 estimates the self-position. Further, the estimation unit 134 estimates the position of the second robot device 200.
  • the estimation unit 134 estimates the position of the second robot device 200 based on the image acquired from the second robot device 200.
  • the determination unit 135 determines various information.
  • the determination unit 135 determines various information.
  • the determination unit 135 specifies various types of information.
  • the determination unit 135 determines various types of information based on the information acquired from the external information processing device.
  • the determination unit 135 determines various types of information based on the information stored in the storage unit 12.
  • the determination unit 135 makes various determinations based on the information acquired by the acquisition unit 131.
  • the determination unit 135 makes various determinations using various sensor information detected by the sensor unit 14.
  • the determination unit 135 makes various determinations using the sensor information detected by the image sensor 141.
  • the determination unit 135 determines various information based on the result of the recognition process by the recognition unit 132.
  • the determination unit 135 determines various information based on the result of the estimation process by the estimation unit 134.
  • the determination unit 135 determines various information based on the result of the prediction process by the estimation unit 134.
  • the determination unit 135 determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target received from the second robot device 200 by the acquisition unit 131.
  • the determination unit 135 determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target that is a child or a pet.
  • the determination unit 135 determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target located in the indoor living environment.
  • The determination unit 135 determines the danger to the monitoring target due to the behavior of the monitoring target.
  • the determination unit 135 determines the danger to the non-monitored target due to the behavior of the monitored target.
  • the determination unit 135 determines the danger of reaching an object other than the monitored object due to the behavior of the monitored object.
  • the determination unit 135 determines the danger caused by the action on the object to be monitored.
  • the determination unit 135 determines the danger caused by the contact with the object to be monitored.
  • the determination unit 135 determines the danger caused by gripping the object to be monitored.
  • the determination unit 135 determines the danger caused by the movement of the position to be monitored.
  • the determination unit 135 determines the danger caused by the intrusion of the monitored object into the area where the occurrence of the danger is predicted.
  • the determination unit 135 determines the danger caused by the behavior of the monitoring target based on the images of the monitoring target received from each of the plurality of second robot devices 200.
  • The planning unit 136 makes various plans.
  • the planning unit 136 generates various information regarding the action plan.
  • the planning unit 136 makes various plans based on the information acquired by the acquisition unit 131.
  • the planning unit 136 makes various plans based on the estimation result by the estimation unit 134.
  • the planning unit 136 makes various plans based on the prediction result by the estimation unit 134.
  • the planning unit 136 makes various plans based on the determination result by the determination unit 135.
  • the planning unit 136 makes an action plan by using various techniques related to the action plan.
  • the planning unit 136 makes an action plan for the second robot device 200.
  • Execution unit 137 executes various processes.
  • the execution unit 137 executes various processes based on information from an external information processing device.
  • the execution unit 137 executes various processes based on the information stored in the storage unit 12.
  • the execution unit 137 executes various processes based on the information stored in the map information storage unit 121, the object information storage unit 122, and the danger determination information storage unit 123.
  • the execution unit 137 executes various processes based on the information acquired by the acquisition unit 131.
  • the execution unit 137 functions as an operation control unit that controls the operation of the operation unit 16.
  • Execution unit 137 executes various processes based on the estimation result by the estimation unit 134.
  • the execution unit 137 executes various processes based on the prediction result by the estimation unit 134.
  • the execution unit 137 executes various processes based on the determination result by the determination unit 135.
  • the execution unit 137 executes various processes based on the action plan by the planning unit 136.
  • the execution unit 137 controls the moving unit 15 to execute the action corresponding to the action plan based on the information of the action plan generated by the planning unit 136.
  • the execution unit 137 executes the movement process of the first robot device 100 according to the action plan under the control of the movement unit 15 based on the information of the action plan.
  • the execution unit 137 controls the operation unit 16 based on the information of the action plan generated by the planning unit 136 to execute the action corresponding to the action plan.
  • the execution unit 137 executes the operation processing of the object by the first robot device 100 according to the action plan under the control of the operation unit 16 based on the information of the action plan.
  • the execution unit 137 transmits various information to the second robot device 200.
  • the execution unit 137 controls the behavior of the second robot device 200 by transmitting various information to the second robot device 200.
  • the execution unit 137 transmits the information of the action plan to the second robot device 200, and executes the action processing of the second robot device 200 according to the action plan.
  • the execution unit 137 causes the second robot device 200 to control the moving unit 25 based on the action plan information, thereby causing the second robot device 200 to execute the movement process according to the action plan.
  • the execution unit 137 executes a process for avoiding the occurrence of danger based on the determination result by the determination unit 135.
  • When the determination unit 135 determines that the occurrence of danger due to the behavior of the monitored object is predicted, the execution unit 137 executes the operation of the object by the operation unit 16.
  • the execution unit 137 executes an operation on the monitoring target by the operation unit 16.
  • the execution unit 137 executes an operation of moving the monitoring target by the operation unit 16.
  • the execution unit 137 executes an operation of evacuating the monitored object from the area where the occurrence of danger is predicted by the operation unit 16.
  • the execution unit 137 executes an operation of suppressing the behavior to be monitored by the operation unit 16.
  • the execution unit 137 executes an operation of grasping the arm to be monitored by the operation unit 16.
  • the execution unit 137 instructs the second robot device 200 to take an action to avoid the occurrence of the danger.
  • the execution unit 137 instructs the second robot device 200 to take an action to draw the attention of the monitored object.
  • the execution unit 137 instructs the second robot device 200 to output voice.
  • the execution unit 137 instructs the second robot device 200 to be located within the field of view to be monitored.
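As one hedged illustration of how the execution unit 137 could choose among the avoidance actions listed above, the following sketch dispatches on the determination result. The action names, the danger-level values, and the reachability flag are assumptions for illustration only; the embodiment does not fix how the choice is made.

```python
from enum import Enum, auto

class Avoidance(Enum):
    EVACUATE_TARGET = auto()            # move the monitored target out of the dangerous area
    SUPPRESS_BEHAVIOR = auto()          # e.g. gently hold the monitored target's arm
    DISTRACT_VIA_SECOND_ROBOT = auto()  # instruct the small robot to draw the target's attention

def select_avoidance(danger_level: float, target_reachable: bool) -> Avoidance:
    """Pick an avoidance action from the determination result.
    Thresholds are illustrative, not part of the embodiment."""
    if danger_level >= 0.9 and target_reachable:
        return Avoidance.EVACUATE_TARGET
    if danger_level >= 0.7 and target_reachable:
        return Avoidance.SUPPRESS_BEHAVIOR
    return Avoidance.DISTRACT_VIA_SECOND_ROBOT
```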
  • the sensor unit 14 detects predetermined information.
  • the sensor unit 14 has an image sensor 141 as an image pickup means for capturing an image.
  • the image sensor 141 detects the image information and functions as the visual sense of the first robot device 100.
  • the image sensor 141 is provided on the head of the first robot device 100.
  • the image sensor 141 captures image information.
  • the sensor unit 14 is not limited to the image sensor 141, and may have various sensors.
  • the sensor unit 14 may have a proximity sensor.
  • the sensor unit 14 may have a range finder such as a LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), a ToF (Time of Flight) sensor, or a stereo camera.
  • the sensor unit 14 may have a sensor (position sensor) that detects the position information of the first robot device 100 such as a GPS (Global Positioning System) sensor.
  • the sensor unit 14 may have a force sensor that detects a force and functions as a tactile sense of the first robot device 100.
  • the sensor unit 14 may have a force sensor provided at the tip (holding unit) of the operation unit 16.
  • the sensor unit 14 may have a force sensor that detects contact with an object by the operation unit 16.
  • the sensor unit 14 is not limited to the above, and may have various sensors.
  • the sensor unit 14 may have various sensors such as an acceleration sensor and a gyro sensor. Further, the sensors that detect the above-mentioned various information in the sensor unit 14 may be common sensors, or may be realized by different sensors.
  • the moving unit 15 has a function of driving the physical configuration of the first robot device 100.
  • the moving unit 15 has a function for moving the position of the first robot device 100.
  • the moving unit 15 is, for example, an actuator.
  • the moving unit 15 may have any configuration as long as the first robot device 100 can realize a desired operation.
  • the moving unit 15 may have any configuration as long as the position of the first robot device 100 can be moved.
  • the moving unit 15 drives, for example, caterpillar tracks or tires.
  • the moving unit 15 moves the first robot device 100 and changes the position of the first robot device 100 by driving the moving mechanism of the first robot device 100 in response to an instruction from the execution unit 137.
  • the first robot device 100 has an operation unit 16.
  • the operation unit 16 is a unit corresponding to a human “hand (arm)” and realizes a function for the first robot device 100 to act on another object.
  • the first robot device 100 has two operating units 16 as two hands.
  • the operation units 16 may be provided at various positions depending on the number of the operation units 16 and the shape of the first robot device 100.
  • the operation unit 16 is driven according to the processing by the execution unit 137.
  • the operation unit 16 is a manipulator that operates an object.
  • the operating unit 16 may be a manipulator having an arm and an end effector.
  • the operation unit 16 operates on the object.
  • the operation unit 16 operates the monitored object.
  • the operation unit 16 performs an operation of moving the position of the monitoring target or suppressing the behavior of the monitoring target.
  • the operation unit 16 operates on the object.
  • the operation unit 16 performs an operation of grasping the object and moving the position of the object.
  • the operation unit 16 has a holding unit that holds an object such as an end effector or a robot hand, and a driving unit that drives the holding unit such as an actuator.
  • the holding unit of the operation unit 16 may be of any method as long as a desired function can be realized, such as a gripper, a multi-finger hand, a jamming hand, a suction hand, and a soft hand.
  • the holding portion of the operating portion 16 may be realized by any configuration as long as it can hold the object; it may be a gripping portion that grips the object, or may be a suction portion that sucks and holds the object.
  • FIG. 4 is a diagram showing a configuration example of the second robot device according to the embodiment.
  • the second robot device 200 includes a communication unit 21, a storage unit 22, a control unit 23, a sensor unit 24, a moving unit 25, and an output unit 27.
  • the second robot device 200 has a moving unit 25 which is a moving means.
  • the second robot device 200 has an output unit 27 which is an output means for outputting information in a predetermined mode.
  • the communication unit 21 is realized by, for example, a NIC or a communication circuit.
  • the communication unit 21 is connected to the network N (Internet, etc.) by wire or wirelessly, and transmits / receives information to / from other devices via the network N.
  • the storage unit 22 is realized by, for example, a semiconductor memory element such as a RAM or a flash memory, or a storage device such as a hard disk or an optical disk.
  • the storage unit 22 stores various information.
  • the storage unit 22 stores various information necessary for monitoring the monitoring target.
  • the storage unit 22 stores various information necessary for tracking the monitoring target.
  • the storage unit 22 stores various types of information received from the first robot device 100.
  • the storage unit 22 stores information indicating an action plan.
  • the storage unit 22 may store the risk level map.
  • the storage unit 22 may store the image captured by the image sensor 241.
  • the storage unit 22 is not limited to the above, and may store various other types of information.
  • the storage unit 22 may store various information about the operation unit.
  • the storage unit 22 may store information indicating the number of operation units and the installation position of the operation units.
  • the storage unit 22 may store various types of information used for identifying (estimating) an object.
  • the control unit 23 is realized by, for example, a CPU, an MPU, or the like executing a program stored inside the second robot device 200 (for example, an information processing program according to the present disclosure) using a RAM or the like as a work area. Further, the control unit 23 may be realized by an integrated circuit such as an ASIC or FPGA.
  • the control unit 23 includes an acquisition unit 231, a recognition unit 232, an estimation unit 233, a determination unit 234, a transmission unit 235, and an execution unit 236, and realizes or executes the functions and actions of the information processing described below.
  • the internal configuration of the control unit 23 is not limited to the configuration shown in FIG. 4, and may be another configuration as long as it is a configuration for performing information processing described later.
  • the acquisition unit 231 acquires various information.
  • the acquisition unit 231 acquires various information from an external information processing device.
  • the acquisition unit 231 receives various information from an external information processing device.
  • the acquisition unit 231 receives various information from the first robot device 100.
  • the acquisition unit 231 acquires various information from the storage unit 22.
  • the acquisition unit 231 acquires information from the recognition unit 232, the estimation unit 233, and the determination unit 234.
  • the acquisition unit 231 stores the acquired information in the storage unit 22.
  • the acquisition unit 231 acquires the sensor information detected by the sensor unit 24.
  • the acquisition unit 231 acquires the sensor information (image information) detected by the image sensor 241.
  • the acquisition unit 231 acquires the image information (image) captured by the image sensor 241.
  • the acquisition unit 231 receives the action plan from the first robot device 100.
  • the recognition unit 232 recognizes various types of information.
  • the recognition unit 232 analyzes various information.
  • the recognition unit 232 analyzes the image information.
  • the recognition unit 232 analyzes various information from the image information based on the information from the external information processing device and the information stored in the storage unit 22.
  • the recognition unit 232 identifies various types of information from the image information.
  • the recognition unit 232 extracts various information from the image information.
  • the recognition unit 232 performs recognition based on the analysis result.
  • the recognition unit 232 recognizes various information based on the analysis result.
  • the recognition unit 232 performs analysis processing related to the image.
  • the recognition unit 232 performs various processes related to image processing.
  • the recognition unit 232 processes the image information (image) acquired by the acquisition unit 231.
  • the recognition unit 232 processes the image information (image) captured by the second robot device 200.
  • the recognition unit 232 processes the image information (image) captured by the image sensor 241.
  • the recognition unit 232 performs processing on the image by appropriately using a technique related to image processing.
  • the recognition unit 232 recognizes the object.
  • the recognition unit 232 recognizes each object included in the image detected by the image sensor 241 by appropriately using various techniques related to object recognition such as general object recognition.
  • the recognition unit 232 recognizes the person in the image.
  • the recognition unit 232 recognizes a person's face in the image.
  • the recognition unit 232 recognizes the face to be monitored by the face recognition function.
  • the recognition unit 232 recognizes the monitored object as a person by the person recognition function.
  • the process (recognition process) performed by the recognition unit 232 may be a simpler process than the process performed by the recognition unit 132 of the first robot device 100. Further, when the second robot device 200 does not perform the recognition process and is controlled by the instruction from the first robot device 100, the second robot device 200 does not have to have the recognition unit 232.
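A minimal sketch of the kind of lightweight recognition the recognition unit 232 might run on the second robot device 200 is shown below, here using an OpenCV Haar cascade for face detection. The embodiment does not prescribe a specific algorithm or library, so this choice is an assumption made only to illustrate that the small robot's recognition can be simpler than the first robot device 100's.

```python
import cv2  # OpenCV; assumed to be available on the second robot device

# Pre-trained frontal face detector shipped with OpenCV (illustrative choice).
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_bgr):
    """Return a list of (x, y, w, h) face rectangles found in the image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return list(faces)
```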
  • the estimation unit 233 estimates various types of information.
  • the estimation unit 233 estimates various types of information based on the information acquired from the external information processing device.
  • the estimation unit 233 estimates various types of information based on the information stored in the storage unit 22.
  • the estimation unit 233 estimates various information based on the result of the recognition process by the recognition unit 232.
  • the estimation unit 233 predicts various types of information.
  • the estimation unit 233 predicts various types of information based on the information acquired from the external information processing device.
  • the estimation unit 233 predicts various types of information based on the information stored in the storage unit 22.
  • the estimation unit 233 predicts various information based on the result of the recognition process by the recognition unit 232.
  • the estimation unit 233 performs various estimations based on the information acquired by the acquisition unit 231.
  • the estimation unit 233 performs various estimations using various sensor information detected by the sensor unit 24.
  • the estimation unit 233 performs various estimations using the sensor information detected by the image sensor 241.
  • the estimation unit 233 performs various estimations using the sensor information detected by the second robot device 200.
  • the estimation unit 233 makes various predictions based on the information acquired by the acquisition unit 231.
  • the estimation unit 233 makes various predictions using various sensor information detected by the sensor unit 24.
  • the estimation unit 233 makes various predictions using the sensor information detected by the image sensor 241.
  • the estimation unit 233 makes various predictions using the sensor information detected by the second robot device 200.
  • the estimation unit 233 performs estimation processing based on the image information acquired by the acquisition unit 231.
  • the estimation unit 233 performs estimation processing based on the image information received from the second robot device 200.
  • the process (estimation process) performed by the estimation unit 233 may be a simpler process than the process performed by the estimation unit 134 of the first robot device 100. Further, when the second robot device 200 does not perform the estimation process and is controlled by the instruction from the first robot device 100, the second robot device 200 does not have to have the estimation unit 233.
  • the determination unit 234 determines various information.
  • the determination unit 234 specifies various types of information.
  • the determination unit 234 determines various types of information based on the information acquired from the external information processing device.
  • the determination unit 234 determines various types of information based on the information stored in the storage unit 22.
  • the determination unit 234 makes various determinations based on the information acquired by the acquisition unit 231.
  • the determination unit 234 makes various determinations using various sensor information detected by the sensor unit 24.
  • the determination unit 234 makes various determinations using the sensor information detected by the image sensor 241.
  • the determination unit 234 determines various information based on the result of the recognition process by the recognition unit 232.
  • the determination unit 234 determines various information based on the result of the estimation process by the estimation unit 233.
  • the determination unit 234 determines various information based on the result of the prediction process by the estimation unit 233.
  • the determination unit 234 determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target received from the second robot device 200 by the acquisition unit 231.
  • the determination unit 234 determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target that is a child or a pet.
  • the determination unit 234 determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target located in the indoor living environment.
  • the determination unit 234 determines the danger to the monitored target due to the behavior of the monitored target.
  • the determination unit 234 determines the danger to the non-monitored target due to the behavior of the monitored target.
  • the determination unit 234 determines the danger of reaching an object other than the monitored object due to the behavior of the monitored object.
  • the determination unit 234 determines the danger caused by the action on the object to be monitored.
  • the determination unit 234 determines the danger caused by the contact with the object to be monitored.
  • the determination unit 234 determines the danger caused by gripping the object to be monitored.
  • the determination unit 234 determines the danger caused by the movement of the position to be monitored.
  • the determination unit 234 determines the danger caused by the intrusion of the monitored object into the area where the danger is predicted to occur.
  • the determination unit 234 determines the danger caused by the behavior of the monitoring target based on the images of the monitoring target received from each of the plurality of second robot devices 200.
  • the process (determination process) performed by the determination unit 234 may be a simpler process than the process performed by the determination unit 135 of the first robot device 100. Further, when the second robot device 200 does not perform the determination process and is controlled by the instruction from the first robot device 100, the second robot device 200 does not have to have the determination unit 234.
  • the transmission unit 235 transmits various information to an external information processing device.
  • the transmission unit 235 transmits various information to the first robot device 100.
  • the transmission unit 235 provides the information stored in the storage unit 22.
  • the transmission unit 235 transmits the information stored in the storage unit 22.
  • the transmission unit 235 transmits the sensor information detected by the sensor unit 24.
  • the transmission unit 235 transmits the sensor information (image information) detected by the image sensor 241.
  • the transmission unit 235 transmits the image information (image) captured by the image sensor 241.
  • the transmission unit 235 transmits an image to the first robot device 100.
  • the transmission unit 235 transmits the image of the monitoring target captured by the image sensor 241 to the first robot device 100.
  • the transmission unit 235 transmits an image of a monitoring target, which is a child or a pet, captured by the image sensor 241 to the first robot device 100.
  • the transmission unit 235 transmits the image of the monitoring target located in the indoor living environment captured by the image sensor 241 to the first robot device 100.
  • the transmission unit 235 transfers the information indicating that the alert has been issued to the first robot device 100.
  • Execution unit 236 executes various processes.
  • the execution unit 236 executes various processes based on information from an external information processing device.
  • the execution unit 236 executes various processes based on the information stored in the storage unit 22.
  • the execution unit 236 executes various processes based on the information stored in the map information storage unit 221, the object information storage unit 222, and the danger determination information storage unit 223.
  • the execution unit 236 executes various processes based on the information acquired by the acquisition unit 231.
  • the execution unit 236 functions as an operation control unit that controls the operation of the operation unit 26.
  • the execution unit 236 executes various processes based on the estimation result by the estimation unit 233.
  • the execution unit 236 executes various processes based on the prediction result by the estimation unit 233.
  • the execution unit 236 executes various processes based on the determination result by the determination unit 234.
  • the execution unit 236 executes various processes based on the action plan acquired from the first robot device 100.
  • the execution unit 236 controls the moving unit 25 to execute the action corresponding to the action plan based on the action plan information acquired from the first robot device 100.
  • the execution unit 236 executes the movement process of the second robot device 200 according to the action plan under the control of the movement unit 25 based on the information of the action plan.
  • the execution unit 236 controls the operation unit 26 to execute the action corresponding to the action plan based on the action plan information acquired from the first robot device 100.
  • the execution unit 236 executes a process for avoiding the occurrence of danger in response to an instruction from the first robot device 100.
  • the execution unit 236 executes an action for directing the attention of the monitored object in response to an instruction from the first robot device 100.
  • the execution unit 236 causes the output unit 27 of the second robot device 200 to output voice in response to an instruction from the first robot device 100.
  • the execution unit 236 executes the movement so as to be located in the field of view to be monitored in response to the instruction from the first robot device 100.
  • the sensor unit 24 detects predetermined information.
  • the sensor unit 24 has an image sensor 241 as an image pickup means for capturing an image.
  • the image sensor 241 detects the image information and functions as the visual sense of the second robot device 200.
  • the image sensor 241 is provided in the front portion of the second robot device 200.
  • the image sensor 241 captures image information.
  • the image sensor 241 detects (images) an image including the monitored TG.
  • the sensor unit 24 is not limited to the image sensor 241 and may have various sensors.
  • the sensor unit 24 may have a proximity sensor.
  • the sensor unit 24 may have a distance measuring sensor such as a LiDAR, a ToF sensor, or a stereo camera.
  • the sensor unit 24 may have a sensor (position sensor) that detects the position information of the second robot device 200 such as a GPS sensor.
  • the sensor unit 24 may have a force sensor that detects a force and functions as a tactile sense of the second robot device 200.
  • the sensor unit 24 may have a force sensor provided at the tip (holding unit) of the operation unit 26.
  • the sensor unit 24 may have a force sensor that detects contact with an object by the operation unit 26.
  • the sensor unit 24 is not limited to the above, and may have various sensors.
  • the sensor unit 24 may have various sensors such as an acceleration sensor and a gyro sensor. Further, the sensors that detect the above-mentioned various information in the sensor unit 24 may be common sensors, or may be realized by different sensors.
  • the moving unit 25 has a function of driving the physical configuration of the second robot device 200.
  • the moving unit 25 has a function for moving the position of the second robot device 200.
  • the moving unit 25 is, for example, an actuator.
  • the moving unit 25 may have any configuration as long as the second robot device 200 can realize a desired operation.
  • the moving unit 25 may have any configuration as long as the position of the second robot device 200 can be moved.
  • the moving unit 25 drives, for example, caterpillar tracks or tires.
  • the moving unit 25 may have a configuration (for example, a rotary wing machine or the like) that realizes the movement of the second robot device 200 in a floating state.
  • the moving unit 25 moves the second robot device 200 and changes the position of the second robot device 200 by driving the moving mechanism of the second robot device 200 in response to an instruction from the execution unit 236.
  • the output unit 27 performs various outputs.
  • the output unit 27 outputs various information.
  • the output unit 27 has a function of outputting audio.
  • the output unit 27 has a speaker that outputs sound.
  • the output unit 27 may have a function of outputting in various modes such as light.
  • the output unit 27 has an output function capable of attracting the attention of the monitored object. If the second robot device 200 does not output to attract the attention of the monitored object, the second robot device 200 does not have to have the output unit 27.
  • FIG. 5 is a flowchart showing an information processing procedure according to the embodiment.
  • the information processing system 1 transmits the image of the monitoring target captured by the second robot by the image sensor to the first robot (step S101).
  • the second robot device 200 transmits the image of the monitoring target captured by the image sensor 241 to the first robot device 100.
  • the information processing system 1 determines the danger caused by the behavior of the monitored target based on the image of the monitored target received by the first robot from the second robot (step S102). For example, the first robot device 100 determines the danger caused by the behavior of the monitored target based on the image of the monitored target received from the second robot device 200.
  • the first robot executes a process for avoiding the occurrence of danger based on the determination result (step S103).
  • the first robot device 100 executes a process for avoiding the occurrence of danger based on the determination result.
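The three steps of FIG. 5 can be summarized in the following sketch. The object interfaces and method names (capture_image, determine_danger, execute_avoidance) are illustrative assumptions; the sketch only restates the S101-S103 flow described above.

```python
def monitoring_cycle(second_robot, first_robot):
    # Step S101: the second robot captures an image of the monitoring target
    # with its image sensor and transmits it to the first robot.
    image = second_robot.capture_image()
    first_robot.receive(image)

    # Step S102: the first robot determines the danger caused by the behavior
    # of the monitored target based on the received image.
    result = first_robot.determine_danger(image)

    # Step S103: the first robot executes a process for avoiding the
    # occurrence of danger based on the determination result.
    if result.danger_predicted:
        first_robot.execute_avoidance(result)
```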
  • FIG. 6 is a diagram showing an example of a conceptual diagram of the configuration of the information processing system.
  • the first robot device 100 which is a large robot, has a photographing sensor, an object recognition function, a self-position estimation function, a risk map estimation function, an external communication function, a child evacuation function, and a moving body.
  • the photographing sensor corresponds to the image sensor 141.
  • the object recognition function corresponds to the recognition unit 132.
  • the self-position estimation function corresponds to the estimation unit 134.
  • the risk map estimation function corresponds to the generation unit 133 and the estimation unit 134.
  • the external communication function corresponds to the communication unit 11, the acquisition unit 131, and the execution unit 137.
  • the child evacuation function corresponds to the execution unit 137 and the operation unit 16.
  • the moving body corresponds to the moving unit 15.
  • the second robot device 200 which is a small robot, has a photographing sensor, an object recognition function, a human body recognition function (tracking function), an external communication function, and a mobile body.
  • the photographing sensor corresponds to the image sensor 241.
  • the object recognition function corresponds to the recognition unit 232.
  • the human body recognition function corresponds to the recognition unit 232 and the execution unit 236.
  • the external communication function corresponds to the communication unit 21, the acquisition unit 231 and the transmission unit 235.
  • the moving body corresponds to the moving unit 25.
  • the information processing system 1 offloads the heavy processing to the large robot (first robot device 100) so that the small robot (second robot device 200) needs only the minimum functions. As a result, the information processing system 1 can save costs by, for example, reducing the cost of the small robot.
  • FIG. 7 is a diagram showing an example of search processing in an information processing system. Specifically, FIG. 7 shows an example of search processing in the information processing system 1. The step numbers shown in FIG. 7 are for explaining the processing (reference numerals) and do not indicate the order of the processing.
  • the second robot device 200 captures a camera image (step S201).
  • the second robot device 200 captures an image by the image sensor 241.
  • the second robot device 200 recognizes a face or a person (step S202). For example, the second robot device 200 recognizes a face or a person included in an image.
  • the second robot device 200 recognizes the object (step S203). For example, the second robot device 200 recognizes an object included in the image.
  • the second robot device 200 makes a determination (step S204). For example, the second robot device 200 determines whether or not a person is included in the image.
  • When the second robot device 200 determines that the image includes a person (step S204: Yes), the second robot device 200 estimates the position of the face (step S205). For example, the second robot device 200 estimates the position of a person's face included in the image. Then, the second robot device 200 performs the process of step S206.
  • When it is determined that the image does not include a person (step S204: No), the second robot device 200 performs the process of step S206 without performing the process of step S205.
  • the second robot device 200 performs self-position control (step S206). For example, the second robot device 200 performs control based on the estimated self-position.
  • the second robot device 200 performs the process of step S201 and transfers information (step S207).
  • the second robot device 200 repeatedly takes a camera image and transfers information to the first robot device 100.
  • the second robot device 200 transfers a camera image to the first robot device 100.
  • the first robot device 100 receives the information (step S208). For example, the first robot device 100 receives a camera image from the second robot device 200.
  • the first robot device 100 estimates the self-position of the small robot (step S209). For example, the first robot device 100 estimates the position of the second robot device 200, which is a small robot, based on the camera image received from the second robot device 200.
  • the first robot device 100 determines whether or not the localization is successful (step S210). For example, the first robot device 100 determines whether or not the position of the second robot device 200 can be estimated based on the camera image received from the second robot device 200. If the localization is not successful (step S210: No), the first robot device 100 ends the localization process with the information received in step S208. For example, when the position of the second robot device 200 cannot be estimated based on the camera image received from the second robot device 200, the first robot device 100 estimates the position of the second robot device 200 from the camera image. End the process.
  • the first robot device 100 updates the risk map (step S211). For example, the first robot device 100 updates the risk map based on the camera image received from the second robot device 200.
  • the first robot device 100 controls the position of the large robot (step S212). For example, the first robot device 100 performs control based on the estimated self-position.
  • the first robot device 100 captures a camera image (step S213).
  • the first robot device 100 captures an image by the image sensor 141.
  • the first robot device 100 estimates the self-position of the large robot (step S214). For example, the first robot device 100 estimates the position of the first robot device 100, which is a large robot, based on the captured camera image.
  • the first robot device 100 recognizes an object (step S215).
  • the first robot device 100 recognizes an object included in the image.
  • the first robot device 100 updates the risk map (step S211). For example, the first robot device 100 updates the risk map based on the captured camera image. Then, the first robot device 100 performs the process of step S212.
  • the information processing system 1 can discover and track a child by using the face and person recognition of the small robot (second robot device 200) in addition to the object recognition of the large robot (first robot device 100). As a result, the information processing system 1 can appropriately search for the child by controlling the movement of each robot when searching for the child.
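As a hedged sketch of the cooperation in FIG. 7, the two loops below pair one iteration on the small robot with the corresponding receive-side processing on the large robot. All method names (capture_image, recognize_faces, localize_small_robot, update_risk_map, and so on) are assumptions used only to mirror the step numbers above.

```python
def small_robot_step(small_robot, first_robot_link):
    """One iteration of the small robot's loop in FIG. 7 (steps S201-S207)."""
    image = small_robot.capture_image()                 # S201
    faces = small_robot.recognize_faces(image)          # S202
    objects = small_robot.recognize_objects(image)      # S203
    if faces:                                           # S204: Yes
        small_robot.estimate_face_position(faces[0])    # S205
    small_robot.control_self_position()                 # S206
    first_robot_link.transfer(image, faces, objects)    # S207

def large_robot_step(large_robot, received):
    """Receive-side processing on the large robot in FIG. 7 (steps S208-S212)."""
    pose = large_robot.localize_small_robot(received.image)   # S209
    if pose is None:                                           # S210: No
        return                                                 # end the localization process
    large_robot.update_risk_map(received.image, pose)          # S211
    large_robot.control_self_position()                        # S212
```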
  • FIG. 8 is a diagram showing another example of the search process in the information processing system. Specifically, FIG. 8 shows another example of the search process in the information processing system 1.
  • the step numbers shown in FIG. 8 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIG. 7 will be omitted as appropriate.
  • the second robot device 200 captures a camera image (step S301).
  • the second robot device 200 recognizes a face or a person (step S302).
  • the second robot device 200 recognizes the object (step S303). For example, the second robot device 200 recognizes an object included in the image.
  • When it is determined that the image includes a person (step S304: Yes), the second robot device 200 estimates the position of the face (step S305).
  • When it is determined that the image does not include a person (step S304: No), the second robot device 200 performs the process of step S306 without performing the process of step S305.
  • the second robot device 200 performs self-position control (step S306).
  • the second robot device 200 performs the process of step S301 and transfers information (step S307).
  • the first robot device 100 receives the information (step S308).
  • the first robot device 100 estimates the self-position of the small robot (step S309).
  • If the localization is not successful (step S310: No), the first robot device 100 ends the localization process with the information received in step S308.
  • When the localization is successful (step S310: Yes), the first robot device 100 updates the risk map (step S311).
  • the first robot device 100 controls the position of the large robot (step S312) and performs the process of step S316.
  • the first robot device 100 captures a camera image (step S313).
  • the first robot device 100 estimates the self-position of the large robot (step S314).
  • the first robot device 100 recognizes the object (step S315).
  • the first robot device 100 updates the risk map (step S311). For example, the first robot device 100 updates the risk map based on the captured camera image. Then, the first robot device 100 performs the process of step S312 and the process of step S316.
  • the first robot device 100 plans the route of the small robot (step S316). For example, the first robot device 100 generates a route plan for the second robot device 200 based on the risk map.
  • the first robot device 100 transfers information (step S317).
  • the first robot device 100 transfers the generated route plan to the second robot device 200.
  • the second robot device 200 receives the information (step S318).
  • the second robot device 200 receives a route plan from the first robot device 100.
  • the second robot device 200 controls its own position based on the received route plan (step S306). For example, the second robot device 200 controls to move a route based on the route plan received from the first robot device 100.
  • the information processing system 1 can grasp the movements of the small robot (second robot device 200) and the large robot (first robot device 100) and share the areas to be searched between them. As a result, the information processing system 1 can appropriately search for the child by controlling the movement of each robot when searching for the child.
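The route planning for the small robot in FIG. 8 (step S316) could, for example, be realized as a search over the risk map grid that avoids high-risk cells. The sketch below uses a plain breadth-first search; the grid representation and the max_risk threshold are assumptions, and any other planner could be substituted.

```python
from collections import deque
from typing import List, Optional, Tuple

Cell = Tuple[int, int]

def plan_route(risk: List[List[float]], start: Cell, goal: Cell,
               max_risk: float = 0.5) -> Optional[List[Cell]]:
    """Breadth-first search over the risk map grid, avoiding cells whose
    risk exceeds max_risk. Returns a list of cells from start to goal,
    or None when the goal is unreachable."""
    rows, cols = len(risk), len(risk[0])
    queue = deque([start])
    parent = {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in parent
                    and risk[nr][nc] <= max_risk):
                parent[(nr, nc)] = cur
                queue.append((nr, nc))
    return None
```

The resulting route would then be transferred to the second robot device 200 (steps S317 and S318), which controls its own position along the received route.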
  • FIG. 9 is a diagram showing an example of accidental ingestion suppression processing in an information processing system. Specifically, FIG. 9 shows an example of accidental ingestion suppression processing in the information processing system 1.
  • the step numbers shown in FIG. 9 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIGS. 7 and 8 will be omitted as appropriate.
  • the second robot device 200 captures a camera image (step S401).
  • the second robot device 200 recognizes a face or a person (step S402).
  • the second robot device 200 recognizes the object (step S403).
  • the second robot device 200 recognizes an object included in the image.
  • the second robot device 200 estimates the position of the face (step S404). For example, the second robot device 200 estimates the position of a person's face included in the image. The second robot device 200 performs self-position control (step S405).
  • the second robot device 200 makes a danger determination (step S406). For example, the second robot device 200 determines whether or not there is a risk that the person included in the image accidentally swallows an object. For example, the second robot device 200 determines whether another object is located near the face of the person included in the image.
  • When the second robot device 200 determines that there is a danger (step S407: Yes), the second robot device 200 issues an alert (step S408). Then, the second robot device 200 performs the process of step S409.
  • When the second robot device 200 determines that there is no danger (step S407: No), the second robot device 200 performs the process of step S409 without performing the process of step S408.
  • the second robot device 200 transfers information (step S409).
  • the second robot device 200 transfers a camera image to the first robot device 100.
  • When the second robot device 200 issues an alert, the second robot device 200 transfers the information including the alert to the first robot device 100.
  • the second robot device 200 transfers information indicating that an alert has been issued to the first robot device 100.
  • the first robot device 100 receives the information (step S410).
  • the first robot device 100 receives information indicating that the alert has been issued.
  • the first robot device 100 confirms the alert (step S411).
  • the first robot device 100 confirms whether the information received from the second robot device 200 includes information indicating that an alert has been issued.
  • the first robot device 100 retracts the foreign matter from the hand (step S412).
  • When the information received from the second robot device 200 includes information indicating that an alert has been issued, the first robot device 100 evacuates the foreign matter from the hand of the monitoring target.
  • the first robot device 100 estimates the self-position of the small robot (step S413).
  • the first robot device 100 updates the risk map (step S414) and performs the process of step S418.
  • the first robot device 100 captures a camera image (step S415).
  • the first robot device 100 estimates the self-position of the large robot (step S416).
  • the first robot device 100 recognizes an object (step S417).
  • the first robot device 100 updates the risk map (step S414).
  • the first robot device 100 controls the position of the large robot (step S418).
  • the first robot device 100 makes a danger determination (step S419). For example, the first robot device 100 determines whether there is a danger related to the monitored object. The first robot device 100 determines whether or not there is a danger related to the monitoring target included in the image.
  • When the first robot device 100 determines that there is a danger (step S420: Yes), the first robot device 100 evacuates the target from the danger (step S421). The first robot device 100 evacuates the monitored object from the danger. Then, the first robot device 100 repeats the process of step S415.
  • When the first robot device 100 determines that there is no danger (step S420: No), the first robot device 100 ends the process of determining the danger based on the information received in step S410. Then, the first robot device 100 repeats the process of step S415.
  • By performing the accidental ingestion suppression process, the information processing system 1 can have the small robot (second robot device 200) issue an alert and send information to the large robot (first robot device 100) when accidental ingestion is likely to occur, so that the ingestion can be prevented by the arm of the large robot. As a result, the information processing system 1 can detect that the monitored object is trying to carry an object at risk of being accidentally swallowed to its mouth and, by removing the object or the like, can suppress accidental ingestion by the monitored object.
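A hedged sketch of the alert flow in FIG. 9 (steps S406-S412) follows. The interfaces (object_near_face, issue_alert, retract_foreign_matter, and the link/message attributes) are illustrative assumptions; the sketch only mirrors the division of roles described above, where the small robot raises the alert and the large robot acts on it with the operation unit 16.

```python
def small_robot_ingestion_check(small_robot, link):
    """Steps S406-S409: danger determination, alert, and information transfer."""
    image = small_robot.capture_image()
    danger = small_robot.object_near_face(image)   # S406/S407: object near the face?
    if danger:
        small_robot.issue_alert()                   # S408
    link.transfer(image=image, alert=danger)        # S409

def large_robot_ingestion_response(large_robot, received):
    """Steps S410-S412: confirm the alert and retract the foreign matter."""
    if received.alert:                              # S411: alert included?
        large_robot.retract_foreign_matter()        # S412: via the operation unit 16
```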
  • FIG. 10 is a diagram showing an example of evacuation guidance processing in the information processing system. Specifically, FIG. 10 shows an example of evacuation guidance processing in the information processing system 1.
  • the step numbers shown in FIG. 10 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIGS. 7 to 9 will be omitted as appropriate.
  • the second robot device 200 captures a camera image (step S501).
  • the second robot device 200 recognizes a face or a person (step S502).
  • the second robot device 200 estimates the position of the face (step S503).
  • the second robot device 200 performs the process of step S506.
  • the second robot device 200 performs self-position control (step S504).
  • the second robot device 200 evacuates while making a sound that the child is interested in (step S505).
  • the second robot device 200 moves away from the monitoring target while outputting voice, so that the monitoring target evacuates so as to follow the second robot device 200.
  • the second robot device 200 transfers information (step S506).
  • the first robot device 100 receives the information (step S507).
  • the first robot device 100 estimates the self-position of the small robot (step S508).
  • the first robot device 100 updates the risk map (step S509) and performs the process of step S513.
  • the first robot device 100 captures a camera image (step S510).
  • the first robot device 100 estimates the self-position of the large robot (step S511).
  • the first robot device 100 recognizes the object (step S512).
  • the first robot device 100 updates the risk map (step S509).
  • the first robot device 100 controls the position of the large robot (step S513).
  • the first robot device 100 makes a danger determination (step S514). For example, the first robot device 100 determines whether there is a danger related to the monitored object. The first robot device 100 determines whether or not there is a danger related to the monitoring target included in the image.
  • When the first robot device 100 determines that there is no danger (step S515: No), the first robot device 100 ends the process of determining the danger based on the information received in step S507. Then, the first robot device 100 repeats the process of step S510.
  • When the first robot device 100 determines that there is a danger (step S515: Yes), the first robot device 100 plans the route of the small robot (step S516). For example, the first robot device 100 generates a route plan for the second robot device 200 based on the risk map so that the second robot device 200 evacuates while making a sound that the child is interested in.
  • the first robot device 100 transfers information (step S517).
  • the first robot device 100 transfers the generated route plan to the second robot device 200.
  • the second robot device 200 receives the information (step S518).
  • the second robot device 200 receives a route plan from the first robot device 100.
  • the second robot device 200 controls its own position based on the received route plan (step S504). For example, the second robot device 200 evacuates while making a sound that the child is interested in based on the route plan received from the first robot device 100 (step S505).
  • By performing the evacuation guidance process, the information processing system 1 can have the large robot (first robot device 100) create a route plan and issue commands to the small robot (second robot device 200) when the large robot determines that there is a danger. As a result, the information processing system 1 can appropriately guide the child to evacuate by having the small robot lead the evacuation when, for example, the child is under a vase and is in danger.
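One possible shape for the evacuation guidance in FIG. 10 is sketched below: the first robot plans a route from the child's position toward a safe area, and the second robot follows it while emitting a sound the child is interested in. The method names and the choice of a single safe goal cell are assumptions for illustration; plan_route refers to the FIG. 8 sketch above.

```python
def evacuation_guidance(first_robot, second_robot, risk_map, child_cell, safe_cell):
    """FIG. 10 sketch: when danger is determined (S515: Yes), the first robot
    plans a route for the second robot (S516) and transfers it (S517); the
    second robot follows the route while emitting an attracting sound
    (S504-S505) so that the child follows it away from the danger."""
    route = first_robot.plan_route(risk_map, start=child_cell, goal=safe_cell)  # S516
    if route is None:
        return
    second_robot.receive_route(route)         # S517/S518
    for cell in route:
        second_robot.emit_attracting_sound()  # S505
        second_robot.move_to(cell)            # S504
```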
  • FIG. 11 is a diagram showing an example of a plan update process in the information processing system. Specifically, FIG. 11 shows an example of the plan update process in the information processing system 1.
  • the step numbers shown in FIG. 11 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIGS. 7 to 10 will be omitted as appropriate.
  • the second robot device 200 captures a camera image (step S601).
  • the second robot device 200 transfers information (step S602).
  • the first robot device 100 receives the information (step S603).
  • the first robot device 100 estimates the self-position of the small robot (step S604).
  • the first robot device 100 recognizes an obstacle (step S605). For example, the first robot device 100 recognizes an obstacle based on the information received from the second robot device 200.
  • the first robot device 100 updates the risk map (step S606). For example, the first robot device 100 updates the risk map based on the information received from the second robot device 200 and the recognized obstacle information.
  • the first robot device 100 captures a camera image (step S607).
  • the first robot device 100 estimates the self-position of the large robot (step S608).
  • the first robot device 100 recognizes the object (step S609).
  • the first robot device 100 updates the risk map (step S606) and controls the position of the large robot (step S610).
  • the first robot device 100 performs a route plan for the small robot (step S611). For example, the first robot device 100 generates a route plan for the second robot device 200 so that the second robot device 200 moves while avoiding obstacles based on the risk map.
  • the first robot device 100 transfers information (step S612).
  • the second robot device 200 receives the information (step S613).
  • the second robot device 200 controls its own position based on the received route plan (step S614).
  • the second robot device 200 makes a collision determination (step S615).
  • the second robot device 200 determines whether or not there is an object that the second robot device 200 would collide with, or approach within a predetermined range, when moving along the route plan.
  • the second robot device 200 transfers information (step S602).
  • the second robot device 200 transfers a camera image taken while moving along the route plan. For example, the second robot device 200 transfers information indicating an object that it collided with, or approached within a predetermined range, while moving along the route plan.
  • By performing the plan update process, the information processing system 1 can have the large robot (first robot device 100) receive the collision determination from the small robot (second robot device 200), update the risk map, and update the route plan of the small robot. As a result, the information processing system 1 can appropriately update the risk map and the route of the small robot even when the small robot collides with an obstacle while moving on the planned route.
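A minimal sketch of the plan update in FIG. 11 is shown below: the cell where the small robot reported a collision is marked as an obstacle in the risk map, and the route is recomputed. It reuses plan_route and Cell from the FIG. 8 sketch above; the choice of 1.0 as the maximum risk value is an assumption.

```python
from typing import List, Optional, Tuple

Cell = Tuple[int, int]

def handle_collision_report(risk: List[List[float]], collision_cell: Cell,
                            start: Cell, goal: Cell) -> Optional[List[Cell]]:
    """Mark the reported collision cell as an obstacle (maximum risk) and
    recompute the small robot's route on the updated risk map."""
    r, c = collision_cell
    risk[r][c] = 1.0                      # update the risk map (step S606)
    return plan_route(risk, start, goal)  # re-plan the small robot's route (step S611)
```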
  • FIG. 12 is a diagram showing an example of rescue processing in an information processing system. Specifically, FIG. 12 shows an example of rescue processing in the information processing system 1.
  • the step numbers shown in FIG. 12 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIGS. 7 to 11 will be omitted as appropriate.
  • the second robot device 200 captures a camera image (step S701).
  • the second robot device 200 detects by the acceleration sensor (step S702).
  • the second robot device 200 detects the acceleration of the second robot device 200 by an acceleration sensor.
  • the second robot device 200 detects lifting (step S703).
  • the second robot device 200 detects whether or not the second robot device 200 has been lifted based on the captured camera image and the information detected by the acceleration sensor. For example, when the second robot device 200 moves in a predetermined direction (for example, upward direction) at a predetermined speed or higher, the second robot device 200 determines that the second robot device 200 has been lifted.
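The lift detection in steps S702-S703 could, for example, integrate the vertical acceleration over time and trigger when the estimated upward speed exceeds a threshold, as in the sketch below. The sampling interval and threshold value are illustrative assumptions.

```python
from typing import Iterable

def lifted(accel_z_samples: Iterable[float], dt: float,
           speed_threshold: float = 0.3) -> bool:
    """Integrate vertical acceleration (gravity removed, in m/s^2) over time;
    if the estimated upward speed exceeds the threshold, the second robot
    device is considered to have been lifted. Values are illustrative."""
    velocity = 0.0
    for az in accel_z_samples:
        velocity += az * dt
        if velocity > speed_threshold:
            return True
    return False
```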
  • the second robot device 200 transfers information (step S704).
  • the first robot device 100 receives the information (step S705).
  • the first robot device 100 estimates the self-position of the small robot (step S706).
  • the first robot device 100 updates the risk map (step S707), controls the position of the large robot (step S709), and plans the route of the small robot (step S711).
  • the first robot device 100 captures a camera image (step S708).
  • the first robot device 100 controls the position of the large robot (step S709). For example, the first robot device 100 moves to the position of the second robot device 200.
  • the first robot device 100 rescues the small robot (step S710).
  • the first robot device 100 rescues the second robot device 200 by carrying the second robot device 200 to a predetermined position or adjusting the second robot device 200 to a predetermined posture.
  • the first robot device 100 performs a route plan for the small robot (step S711).
  • the first robot device 100 transfers information (step S712).
  • the second robot device 200 receives the information.
  • the second robot device 200 controls its own position based on the received route plan (step S714).
  • the second robot device 200 moves according to the control of its own position and transfers information (step S704).
  • the second robot device 200 transfers a camera image taken when the robot device 200 moves along the route plan.
  • By performing the rescue process, the large robot (first robot device 100) can find the small robot (second robot device 200) based on the lift detection result and the image of the small robot, and set it back down on the ground. As a result, the information processing system 1 can appropriately rescue the small robot even if the small robot is lifted by a child and placed somewhere.
  • FIG. 13 is a diagram showing an example of recognition of a monitoring target in an information processing system. Each process shown in FIG. 13 may be performed by any device included in the information processing system 1, such as the first robot device 100 and the second robot device 200.
  • the second robot device 200 may recognize the person or the object.
  • the first robot device 100 may recognize the person or the object.
  • the second robot device 200 may estimate the position of the face.
  • the first robot device 100 may estimate the position of the face.
  • the first robot device 100 and the second robot device 200 may cooperate with each other to perform each process.
  • the information processing system 1 recognizes a person (step S21).
  • the information processing system 1 recognizes a person included in the image.
  • the information processing system 1 recognizes a person included in the image IM21.
  • the information processing system 1 recognizes a person included in the image IM 21 as a monitoring target TG.
  • the information processing system 1 estimates the position of the face (step S22).
  • the information processing system 1 estimates the area in which the human face is located in the image.
  • the information processing system 1 estimates the position of the face of the monitored TG, which is a person included in the image IM21.
  • the information processing system 1 estimates that the face FC of the monitored TG is located in the region AR1 in the upper center of the image IM21.
  • the information processing system 1 recognizes the object (step S23).
  • the information processing system 1 recognizes an object (object) included in the image.
  • the information processing system 1 recognizes the object OB21 included in the image IM21.
  • the information processing system 1 recognizes that the object OB21 is located in the region AR2 at the lower center of the image IM21.
  • the information processing system 1 recognizes that the area AR2 of the object OB21 is located at a position overlapping the area AR1 of the face FC.
  • the information processing system 1 estimates that the object OB21 is located near the face FC of the monitored TG, who is a person. Therefore, when the object OB21 is an object that may be accidentally swallowed, the information processing system 1 determines that the monitored TG is at risk of accidentally swallowing the object OB21, and executes a process of suppressing the occurrence of that risk.
  • the first robot device 100 suppresses the occurrence of the risk of the monitored TG accidentally swallowing the object OB21 by grasping the hand of the monitored TG with the operation unit 16 and suppressing the movement of the hand of the monitored TG. Further, for example, the first robot device 100 suppresses the occurrence of the risk of the monitored TG accidentally swallowing the object OB21 by removing the object OB21 from the hand of the monitored TG with the operation unit 16.
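The geometric check behind the example of FIG. 13 can be expressed as an overlap test between the estimated face region (AR1) and the recognized object region (AR2). The sketch below assumes axis-aligned rectangles and an externally supplied flag for whether the object is small enough to be swallowed; both are simplifications.

```python
from typing import Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height) in image coordinates

def regions_overlap(a: Rect, b: Rect) -> bool:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def ingestion_risk(face_region: Rect, object_region: Rect,
                   object_is_swallowable: bool) -> bool:
    """As in FIG. 13: the object region AR2 overlapping the face region AR1
    of the monitored target indicates a risk of accidental ingestion when
    the object may be swallowed."""
    return object_is_swallowable and regions_overlap(face_region, object_region)
```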
  • FIG. 14 is a diagram showing an example of classification of objects in an information processing system.
  • FIG. 15 is a diagram showing an example of updating the risk map in the information processing system.
  • FIG. 16 is a diagram showing an example of a conceptual diagram of updating a risk map in an information processing system. The same points as in FIG. 1 will not be described.
  • the information processing system 1 classifies the objects OB1 to OB7 located in the space SP according to their attributes (step S31). For example, the information processing system 1 classifies the objects OB1 to OB7 according to the ease of changing the position and posture (hereinafter, also referred to as "arrangement mode") of the objects. For example, the information processing system 1 classifies the objects OB1 to OB7 into two categories: a category "CT1 (Rigid)" in which it is difficult to change the arrangement mode and a category "CT2 (Moving)" in which the arrangement mode can be easily changed.
  • The classification is not limited to two categories; for example, the ease of changing the arrangement mode may be classified into a plurality of levels such as "CT2 (Moving)", "CT3 (Easy)", "CT4", and so on.
  • Since the arrangement mode of the objects classified into the category CT1 is difficult to change, the information processing system 1 may estimate that their arrangement mode is not updated when the risk map is updated. Further, since the arrangement mode of the objects classified into the category CT2 can easily be changed, the information processing system 1 may estimate that their arrangement mode may have changed when the risk map is updated. Then, the information processing system 1 may update the risk map by updating the arrangement mode of the objects classified into the category CT2 with reference to the arrangement mode of the objects classified into the category CT1.
  • the objects may be classified into various categories according to their attributes, and the number of categories may be three or more.
  • objects may be categorized based on various attributes such as size, weight, hardness, height of placement position, and the like.
  • the information processing system 1 may manage the monitored TG in the space SP as a category "CT0 (Tracking)" different from other objects OB1 to OB7 and the like.
  • the information processing system 1 classifies the object OB1 into the category CT1.
  • the information processing system 1 classifies the object OB1, which is a sink installed in the space SP, into the category CT1.
  • the information processing system 1 classifies each object into categories using images and information on each object.
  • the information processing system 1 classifies the object OB2 into the category CT2.
  • the information processing system 1 classifies the object OB2, which is a table arranged in the space SP, into the category CT2. Further, the information processing system 1 classifies the object OB3 into the category CT2.
  • the information processing system 1 classifies the object OB3, which is a sofa arranged in the space SP, into the category CT2.
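  • As a rough illustration of the classification described above (not taken from the disclosure), objects could be assigned to the categories CT0 to CT2 from a few assumed attributes, such as whether they are fixed installations and how heavy they are; the attribute names and thresholds below are hypothetical.

    # Minimal sketch: classify objects by how easily their arrangement mode changes.
    from dataclasses import dataclass

    @dataclass
    class SpaceObject:
        name: str
        fixed_installation: bool      # e.g. a sink fixed in place
        weight_kg: float
        is_monitoring_target: bool = False

    def classify(obj: SpaceObject, heavy_kg: float = 100.0) -> str:
        if obj.is_monitoring_target:
            return "CT0"              # Tracking: managed separately from other objects
        if obj.fixed_installation or obj.weight_kg >= heavy_kg:
            return "CT1"              # Rigid: arrangement mode is difficult to change
        return "CT2"                  # Moving: arrangement mode is easy to change

    for o in (SpaceObject("OB1 (sink)", True, 80.0),
              SpaceObject("OB2 (table)", False, 15.0),
              SpaceObject("OB3 (sofa)", False, 30.0)):
        print(o.name, "->", classify(o))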
  • FIG. 15 shows a risk map before and after the update.
  • the information processing system 1 updates the risk map MP1 (step S41). For example, the first robot device 100 updates the risk map MP1 based on the collected images and the like. The first robot device 100 updates the risk map MP1 based on an image detected by the image sensor 141, an image acquired from the second robot device 200, and the like. In the example of FIG. 15, the information processing system 1 updates the risk map MP1 to the risk map MP2.
  • since it is difficult to change the arrangement mode of the objects classified into the category CT1, the information processing system 1 assumes that their arrangement mode is not updated when the risk map is updated. Further, since the arrangement mode of the objects classified into the category CT2 can be changed easily, the information processing system 1 assumes that their arrangement mode may have changed when the risk map is updated. Then, the information processing system 1 updates the risk map by updating the arrangement mode of the objects classified into the category CT2 with reference to the arrangement mode of the objects classified into the category CT1.
  • the information processing system 1 estimates the arrangement mode of the objects OB1 to OB7 and the like based on the image detected by the image sensor 141 and the image acquired from the second robot device 200. In the example of FIG. 15, the information processing system 1 estimates that the arrangement modes of the objects OB2 to OB5 have changed. For example, the information processing system 1 estimates the arrangement mode of the objects OB2 to OB7 and the like belonging to the category CT2 with reference to the objects (such as the object OB1) classified into the category CT1, whose arrangement mode is difficult to change. Then, the information processing system 1 updates the risk map MP1 to the risk map MP2 based on the estimation result.
  • the information processing system 1 estimates that the objects OB2 and OB3 have been tilted and that the objects OB4 and OB5 have been moved, and updates the risk map MP1 to the risk map MP2. In this way, the information processing system 1 classifies the objects from the images captured by the robots and updates the risk map.
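  • A minimal sketch of this update policy, under the assumption that each object is represented by a 2D pose and that a hypothetical estimator re-estimates poses from the latest images, might look as follows; estimate_pose_from_images stands in for the image-based estimation and is not an actual function of the system.

    # Minimal sketch: keep CT1 (fixed) objects as anchors and re-estimate CT2 objects.
    from typing import Callable, Dict, Tuple

    Pose = Tuple[float, float, float]   # (x, y, yaw), an illustrative representation

    def update_risk_map(poses: Dict[str, Pose],
                        categories: Dict[str, str],
                        estimate_pose_from_images: Callable[[str], Pose]) -> Dict[str, Pose]:
        updated: Dict[str, Pose] = {}
        for name, pose in poses.items():
            if categories.get(name) == "CT1":
                updated[name] = pose                             # fixed objects are kept as-is
            else:
                updated[name] = estimate_pose_from_images(name)  # movable objects are re-estimated
        return updated

    mp1 = {"OB1": (0.0, 0.0, 0.0), "OB2": (2.0, 1.0, 0.0), "OB4": (3.0, 2.0, 0.0)}
    cats = {"OB1": "CT1", "OB2": "CT2", "OB4": "CT2"}
    mp2 = update_risk_map(mp1, cats,
                          lambda n: (2.5, 1.2, 0.3) if n == "OB2" else (3.4, 2.1, 0.0))
    print(mp2)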
  • the information processing system 1 may change the map update rate depending on the attribute.
  • when the attribute (category) whose arrangement mode can be changed is divided into a plurality of levels, such as "CT2", "CT3", and "CT4", according to how easily the arrangement mode changes, the information processing system 1 may change the update rate of the arrangement mode of the objects for each level.
  • the information processing system 1 may increase the update rate of the arrangement mode for objects in a category in which the arrangement mode can be changed easily. For example, objects in the category CT2 may have a higher update rate of the arrangement mode, and objects in the category CT4 may have a lower update rate of the arrangement mode.
  • the information processing system 1 may change the map update rate according to the attributes by appropriately using various methods.
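  • One simple way to realize such per-category update rates, sketched here with purely illustrative periods, is to keep a refresh period for each category and re-estimate an object's arrangement mode only when its period has elapsed; the period values are assumptions, not disclosed parameters.

    # Minimal sketch: categories whose arrangement mode changes easily are refreshed more often.
    import time

    UPDATE_PERIOD_S = {"CT1": 600.0, "CT2": 5.0, "CT3": 30.0, "CT4": 120.0}  # assumed values

    def due_for_update(category: str, last_update_s: float, now_s: float) -> bool:
        return now_s - last_update_s >= UPDATE_PERIOD_S.get(category, 60.0)

    now = time.time()
    print(due_for_update("CT2", now - 10.0, now))   # True: movable objects refresh quickly
    print(due_for_update("CT1", now - 10.0, now))   # False: fixed objects refresh rarely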
  • FIG. 16 shows an example of the processing flow for updating the risk map.
  • Each process shown in FIG. 16 may be performed by any device included in the information processing system 1, such as the first robot device 100 and the second robot device 200. Further, with respect to the processing shown in FIG. 16, the same points as those in FIGS. 1 and 7 to 15 and the like will be omitted as appropriate. Further, the step numbers shown in FIG. 16 are for explaining the processing (reference numerals) and do not indicate the order of the processing.
  • the information processing system 1 specifies an inviolable target area (step S801).
  • the information processing system 1 may accept the designation of the inviolable target area from the administrator of the information processing system 1 or the like, or may automatically specify the inviolable target area using an image or the like.
  • the information processing system 1 maps the environmental information (step S802).
  • the information processing system 1 estimates the environmental information from the sensor image and registers the estimated environmental information in the object database. For example, the information processing system 1 estimates information on the arrangement mode of an object or the like from an image, and registers the estimated object information in an object DB.
  • the information processing system 1 acquires a sensor image (step S803).
  • the information processing system 1 acquires images detected by the first robot device 100 and the second robot device 200.
  • the information processing system 1 classifies objects (step S804).
  • the information processing system 1 classifies objects by using the sensor image and the information of each object registered in the object DB.
  • the information processing system 1 classifies objects into categories CT1, CT2, and the like.
  • the information processing system 1 estimates the position of the object (step S805). Then, the information processing system 1 updates the risk level map (step S806). The information processing system 1 updates the risk map of the risk map DB based on the estimated position of the object.
  • the information processing system 1 estimates the position and pose (posture) of the target (step S807). For example, the information processing system 1 estimates the position and orientation of the monitored TG.
  • the information processing system 1 checks prohibited items (step S808).
  • the information processing system 1 checks whether there is a corresponding prohibited item based on the estimated position and posture of the monitored TG and the risk map.
  • the information processing system 1 checks whether there is a prohibited item that prohibits the behavior of the monitoring target because a danger may occur.
  • the information processing system 1 blocks intrusion (step S809).
  • when a prohibited item that prohibits the action of the monitoring target is applicable, the information processing system 1 blocks the intrusion of the monitoring target into the dangerous area.
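  • The prohibited-item check and the blocking of steps S808 and S809 could be sketched as follows, assuming for illustration that each dangerous area is a circle and that a blocking callback (for example, the first robot device 100 moving between the target and the area) is available; these shapes and names are assumptions, not the disclosed implementation.

    # Minimal sketch: if the monitoring target is inside a dangerous area, trigger blocking.
    from typing import Callable, Iterable, Tuple

    Point = Tuple[float, float]
    Circle = Tuple[Point, float]          # (center, radius) of a dangerous area

    def inside(p: Point, area: Circle) -> bool:
        (cx, cy), r = area
        return (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= r ** 2

    def check_and_block(target_pos: Point,
                        danger_areas: Iterable[Circle],
                        block: Callable[[Circle], None]) -> bool:
        for area in danger_areas:
            if inside(target_pos, area):
                block(area)               # e.g. interpose the robot or guide the target away
                return True
        return False

    areas = [((2.0, 3.0), 0.5)]
    check_and_block((2.2, 3.1), areas, lambda a: print("blocking intrusion into", a))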
  • the information processing system 1 estimates the self-position of the robot (step S810).
  • the information processing system 1 estimates the positions of the first robot device 100 and the second robot device 200.
  • the information processing system 1 updates the self-position of the robot (step S811).
  • the information processing system 1 updates the positions of the first robot device 100 and the second robot device 200.
  • the information processing system 1 performs the process of step S809 based on the updated positions of the first robot device 100 and the second robot device 200.
  • the information processing system 1 can update the risk map even if the position of an object such as a chair changes by estimating the environmental information from the sensor image and updating it.
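  • Purely as an illustration of how the steps of FIG. 16 could be arranged into one monitoring cycle, the following sketch uses hypothetical stub functions for each step; it is not the disclosed implementation, and, as noted above, the reference numerals do not fix the order of processing.

    # Minimal sketch: one cycle of the risk-map update flow (step numbers for orientation only).
    def acquire_sensor_image():                 # S803
        return "image"

    def classify_objects(image):                # S804
        return {"OB1": "CT1", "OB2": "CT2"}

    def estimate_object_positions(image, cats): # S805
        return {name: (0.0, 0.0) for name in cats}

    def update_risk_map(positions):             # S806
        print("risk map updated:", positions)

    def estimate_target_pose(image):            # S807
        return (1.0, 2.0)

    def check_prohibited_matters(pose):         # S808
        return False

    def block_intrusion(pose):                  # S809
        print("blocking intrusion at", pose)

    def update_robot_self_position(image):      # S810-S811
        print("robot self-positions updated")

    def monitoring_cycle():
        image = acquire_sensor_image()
        cats = classify_objects(image)
        update_risk_map(estimate_object_positions(image, cats))
        pose = estimate_target_pose(image)
        if check_prohibited_matters(pose):
            block_intrusion(pose)
        update_robot_self_position(image)

    monitoring_cycle()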
  • FIG. 1 shows an example in which one first robot device 100 and one second robot device 200 monitor a monitoring target, but one first robot device 100 may be associated with a plurality of second robot devices 200.
  • For example, each second robot device 200 tracks and monitors one monitoring target, and one first robot device 100 collects information from each second robot device 200, so that the plurality of monitoring targets are monitored as a whole.
  • In this case, each second robot device 200 is associated with one monitoring target, while the first robot device 100 may be associated with a plurality of monitoring targets. That is, the information processing system 1 may include a combination of one first robot device 100 and a plurality of second robot devices 200.
  • each component of each device shown in the figures is a functional concept and does not necessarily have to be physically configured as shown. That is, the specific form of distribution and integration of each device is not limited to the one shown in the figures, and all or part of each device can be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions.
  • the information processing system includes a first robot (the first robot device 100 in the embodiment) having a moving means (the moving unit 15 in the embodiment) and a means for operating an object (the operation unit 16 in the embodiment), and a second robot (the second robot device 200 in the embodiment) that has a moving means (the moving unit 25 in the embodiment) and tracks a monitoring target to be monitored. The second robot transmits an image of the monitoring target captured by an image sensor (the image sensor 241 in the embodiment) to the first robot, and the first robot determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target received from the second robot, and executes a process for avoiding the occurrence of the danger based on the determination result.
  • in the information processing system, the second robot that tracks the monitoring target and the first robot that has the means for operating an object cooperate with each other to execute the processing for avoiding the occurrence of danger caused by the behavior of the monitoring target. This makes it possible to appropriately monitor the monitoring target without attaching a sensor to a structure in the space.
  • the monitoring target is children or pets.
  • the information processing system can appropriately manage the safety of children or pets as monitoring targets. That is, in the information processing system, the second robot and the first robot cooperate with each other to perform a process for avoiding the occurrence of danger caused by the behavior of a child or a pet, so that the safety of children or pets can be appropriately managed without attaching a sensor to a structure in the space.
  • the monitoring target is located in an indoor living environment.
  • the information processing system can appropriately monitor the monitoring target located in the indoor living environment. That is, in the information processing system, the second robot and the first robot cooperate with each other to execute a process for avoiding the occurrence of danger caused by the behavior of the monitoring target located in the indoor living environment, so that the monitoring target located in the indoor living environment can be appropriately monitored without attaching a sensor to a structure in the indoor living environment.
  • the second robot is a small robot that is smaller in size than the first robot, which is a large robot.
  • in the information processing system, the second robot, which is smaller than the first robot, tracks the monitoring target, so that the possibility of becoming unable to track the monitoring target can be reduced. Therefore, since the information processing system can reduce the possibility of losing sight of the monitoring target, it is possible to appropriately monitor the monitoring target without attaching a sensor to the structure in the space.
  • the first robot determines the danger to the monitored target due to the behavior of the monitored target.
  • the information processing system can appropriately avoid the danger to the monitored target by determining the danger to the monitored target due to the behavior of the monitored target. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space. In this way, the information processing system can appropriately manage the safety of the monitored object.
  • the first robot determines the danger that extends to other than the monitored target due to the behavior of the monitored target.
  • the information processing system can appropriately avoid the danger that extends to the non-monitored target by determining the danger that extends to the non-monitored target due to the behavior of the monitored target. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space. In this way, the information processing system can appropriately manage the safety of objects other than those to be monitored.
  • the first robot determines the danger caused by the action on the object to be monitored.
  • the information processing system can appropriately avoid the danger caused by the behavior of the monitored object by determining the danger caused by the behavior of the monitored object. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • the first robot determines the danger caused by contact with the object to be monitored.
  • the information processing system can appropriately avoid the danger caused by the contact with the monitored object by determining the danger caused by the contact with the monitored object. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • the first robot determines the danger caused by grasping the object to be monitored.
  • the information processing system can appropriately avoid the danger caused by the gripping of the monitored object by determining the danger caused by the gripping of the monitored object. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • the first robot determines the danger caused by the movement of the position of the monitoring target.
  • the information processing system can appropriately avoid the danger caused by the movement of the position of the monitoring target by determining the danger caused by the movement of the position of the monitoring target. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • the first robot determines the danger caused by the intrusion of the monitored object into the area where the danger is predicted to occur.
  • the information processing system determines the danger caused by the intrusion of the monitored target into the area where the danger is predicted to occur, and thereby the danger caused by the intrusion of the monitored target into the area where the danger is predicted to occur. Can be avoided appropriately. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • the first robot executes the operation of the object by the operating means.
  • the information processing system determines the danger caused by the behavior of the monitored target, and when it is determined that the occurrence of the danger caused by the behavior of the monitored target is predicted, the information processing system executes the operation of an object by the operating means, so that the danger can be avoided appropriately. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • the first robot executes an operation on the monitored object by the operating means.
  • the information processing system can appropriately avoid danger by operating the operating means by executing the operation on the monitored object by the operating means. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • the first robot executes an operation of moving the monitoring target by the operating means.
  • the information processing system can appropriately avoid danger by operating the operating means by executing the operation of moving the monitoring target by the operating means. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • the first robot executes an operation of evacuating the monitored object from the area where the occurrence of danger is predicted by the operating means.
  • the information processing system can appropriately avoid the danger by operating the operating means by executing the operation of evacuating the monitored object from the area where the occurrence of the danger is predicted by the operating means. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space. In this way, the information processing system can appropriately manage the safety of the monitored target by evacuating the monitored target from the area where the occurrence of danger is predicted.
  • the first robot executes an operation of suppressing the behavior of the monitored object by the operating means.
  • the information processing system can appropriately avoid danger by operating the operating means by executing an operation of suppressing the behavior of the monitored object by the operating means. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space. In this way, the information processing system can appropriately manage the safety of the monitored object by suppressing the behavior of the monitored object, for example, when the monitored object is approaching or touching a dangerous object.
  • the first robot executes an operation of grasping the arm to be monitored by the operating means.
  • the information processing system can appropriately avoid danger by operating the operating means by executing the operation of grasping the arm of the monitoring target by the operating means. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space. In this way, for example, when the monitored target may accidentally swallow an object, the information processing system can appropriately manage the safety of the monitored target by grasping the arm of the monitored target and suppressing the behavior of swallowing the object.
  • the first robot determines that the occurrence of danger due to the behavior of the monitored object is predicted, the first robot instructs the second robot to take an action to avoid the occurrence of danger.
  • the information processing system can appropriately avoid the danger by causing the second robot to take an action for avoiding the occurrence of the danger. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • the first robot instructs the second robot to take an action to draw the attention of the monitored object.
  • the information processing system can appropriately avoid the danger by making the second robot pay attention to the monitored object and causing the second robot to take an action for avoiding the occurrence of the danger. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
  • FIG. 17 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of an information processing device such as a first robot device and a second robot device.
  • the computer 1000 includes a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input / output interface 1600.
  • Each part of the computer 1000 is connected by a bus 1050.
  • the CPU 1100 operates based on the program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands the program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to various programs.
  • the ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, a program that depends on the hardware of the computer 1000, and the like.
  • the HDD 1400 is a computer-readable recording medium that non-temporarily records a program executed by the CPU 1100 and data used by the program.
  • the HDD 1400 is a recording medium for recording an information processing program according to the present disclosure, which is an example of program data 1450.
  • the communication interface 1500 is an interface for the computer 1000 to connect to an external network 1550 (for example, the Internet).
  • the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
  • the input / output interface 1600 is an interface for connecting the input / output device 1650 and the computer 1000.
  • the CPU 1100 receives data from an input device such as a keyboard or mouse via the input / output interface 1600. Further, the CPU 1100 transmits data to an output device such as a display, a speaker, or a printer via the input / output interface 1600. Further, the input / output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium (media).
  • the media is, for example, an optical recording medium such as a DVD (Digital Versatile Disc) or PD (Phase change rewritable Disk), a magneto-optical recording medium such as an MO (Magneto-Optical disk), a tape medium, a magnetic recording medium, or a semiconductor memory.
  • the present technology can also have the following configurations.
  • An information processing system comprising: a first robot having a moving means and an operating means for manipulating an object; and a second robot having a moving means and tracking a monitoring target to be monitored, wherein the second robot transmits an image of the monitoring target captured by an image sensor to the first robot, and the first robot determines a danger caused by the behavior of the monitoring target based on the image of the monitoring target received from the second robot, and executes a process for avoiding the occurrence of the danger based on the determination result.
  • The information processing system according to any one of (1) to (3), wherein the second robot is a small robot having a size smaller than that of the first robot, which is a large robot.
  • The information processing system according to any one of (1) to (4), wherein the first robot determines the danger to the monitoring target caused by the behavior of the monitoring target.
  • The information processing system according to any one of (1) to (4), wherein the first robot determines the danger extending to other than the monitoring target caused by the behavior of the monitoring target.
  • The information processing system according to any one of (1) to (6), wherein the first robot determines the danger caused by an action of the monitoring target on an object.
  • The information processing system according to (7), wherein the first robot determines the danger caused by contact of the monitoring target with an object.
  • The information processing system according to (7) or (8), wherein the first robot determines the danger caused by grasping of an object by the monitoring target.
  • The information processing system according to any one of (1) to (9), wherein the first robot determines the danger caused by movement of the position of the monitoring target.
  • The information processing system according to (10), wherein the first robot determines the danger caused by intrusion of the monitoring target into an area where the occurrence of the danger is predicted.
  • The information processing system according to any one of (1) to (11), wherein the first robot executes an operation of an object by the operating means when it is determined that the occurrence of the danger caused by the behavior of the monitoring target is predicted.
  • The information processing system according to (12), wherein the first robot executes an operation on the monitoring target by the operating means.
  • The information processing system according to (13), wherein the first robot executes an operation of moving the monitoring target by the operating means.
  • The information processing system according to (14), wherein the first robot executes, by the operating means, an operation of evacuating the monitoring target from an area where the occurrence of the danger is predicted.
  • The information processing system according to (13), wherein the first robot executes an operation of suppressing the behavior of the monitoring target by the operating means.
  • The information processing system according to (16), wherein the first robot executes an operation of grasping an arm of the monitoring target by the operating means.
  • The information processing system according to any one of (1) to (17), wherein, when it is determined that the occurrence of the danger caused by the behavior of the monitoring target is predicted, the first robot instructs the second robot to take an action for avoiding the occurrence of the danger.
  • The information processing system according to (18), wherein the first robot instructs the second robot to take an action for drawing the attention of the monitoring target.
  • The information processing system according to (19), wherein the first robot instructs the second robot to output voice.
  • The information processing system according to (19) or (20), wherein the first robot instructs the second robot to be located within the field of view of the monitoring target.
  • The information processing system according to any one of (1) to (21), wherein the first robot estimates a self-position.
  • The information processing system according to any one of (1) to (22), wherein the first robot recognizes an object.
  • The information processing system according to any one of (1) to (23), wherein the first robot generates a risk map related to the danger.
  • The information processing system according to any one of (1) to (24), wherein the first robot maps the position of the second robot.
  • The information processing system according to any one of (1) to (25), wherein the second robot recognizes the face of the monitoring target by a face recognition function.
  • The information processing system according to any one of (1) to (26), wherein the second robot recognizes the monitoring target as a person by a person recognition function.
  • The information processing system according to any one of (1) to (27), wherein the second robot recognizes an object.
  • The information processing system comprising a plurality of the second robots each tracking one of a plurality of monitoring targets, wherein the first robot determines the danger caused by the behavior of each monitoring target based on the images of the monitoring targets received from the respective second robots, and executes a process for avoiding the occurrence of the danger based on the determination result.
  • An information processing method executed by a first robot having a moving means and an operating means for manipulating an object, and a second robot having a moving means and tracking a monitoring target to be monitored, wherein the second robot transmits an image of the monitoring target captured by an image sensor to the first robot, and the first robot determines a danger caused by the behavior of the monitoring target based on the image of the monitoring target received from the second robot, and executes a process for avoiding the occurrence of the danger based on the determination result.
  • Information processing system 100 1st robot device 11 Communication unit 12 Storage unit 121 Map information storage unit 122 Object information storage unit 123 Danger judgment information storage unit 13 Control unit 131 Acquisition unit 132 Recognition unit 133 Generation unit 134 Estimating unit 135 Judgment unit 136 Planning unit 137 Execution unit 14 Sensor unit 141 Image sensor 15 Moving unit 16 Operation unit 200 2nd robot device 21 Communication unit 22 Storage unit 23 Control unit 231 Acquisition unit 232 Recognition unit 233 Estimating unit 234 Judgment unit 235 Transmission unit 236 Execution unit 24 Sensor unit 241 Image sensor 25 Moving unit 27 Output unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Alarm Systems (AREA)

Abstract

The information processing system according to the present disclosure is provided with: a first robot with a moving means and an operating means for operating an object; and a second robot with a moving means, the second robot tracking a monitoring target which is a target of monitoring. The second robot transmits to the first robot an image of the monitoring target taken by an image sensor. The first robot determines a danger due to a behavior of the monitoring target on the basis of the image of the monitoring target received from the second robot, and executes a process for avoiding an occurrence of the danger on the basis of a result of the determination.

Description

情報処理システム及び情報処理方法Information processing system and information processing method
 本開示は、情報処理システム及び情報処理方法に関する。 This disclosure relates to an information processing system and an information processing method.
 センサにより検知された情報に基づいて、子ども等の監視対象の安全を管理するシステムが知られている。例えば、子どもに対して危険を警報して、子どもを安全な状態に誘導したり、警報音声によって子どもが危険な状態であることを保育者に気付かせたりする(特許文献1)。 A system that manages the safety of monitored objects such as children based on the information detected by the sensor is known. For example, a danger is alerted to a child to guide the child to a safe state, or a caregiver is made aware that the child is in a dangerous state by an alarm sound (Patent Document 1).
特開2015-158846号公報Japanese Unexamined Patent Publication No. 2015-158846
 従来技術によれば、空間の天井に設けられた複数の距離画像センサを用いて、子どもの行動を予測し、所定時間後の子どもが危険な状態かを判定する。 According to the prior art, a plurality of distance image sensors provided on the ceiling of the space are used to predict the behavior of the child and determine whether the child is in a dangerous state after a predetermined time.
 しかしながら、上記の従来技術では、空間の天井等の空間の構造物にセンサを設置する必要が生じる。また、空間の天井にセンサを設置する場合、天井側から見た場合、すなわち平面視した場合における空間の死角に子ども等の監視対象が位置する場合、監視対象の行動を監視することが難しいといった問題がある。そのため、上記の従来技術では、監視対象の安全を適切に管理することができない場合が有り、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることが望まれている。 However, in the above-mentioned conventional technology, it is necessary to install the sensor in the structure of the space such as the ceiling of the space. In addition, when the sensor is installed on the ceiling of the space, it is difficult to monitor the behavior of the monitored object when viewed from the ceiling side, that is, when the monitored object such as a child is located in the blind spot of the space when viewed in a plan view. There's a problem. Therefore, in the above-mentioned conventional technology, it may not be possible to properly manage the safety of the monitored object, and it is desired that the monitored object can be appropriately monitored without attaching a sensor to the structure of the space. ..
 そこで、本開示では、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる情報処理システム及び情報処理方法を提案する。 Therefore, in this disclosure, we propose an information processing system and an information processing method that can appropriately monitor the monitoring target without attaching a sensor to the structure of the space.
 上記の課題を解決するために、本開示に係る一形態の情報処理システムは、移動手段と、物体を操作する操作手段とを有する第1ロボットと、移動手段を有し、監視の対象となる監視対象を追尾する第2ロボットと、を備えた情報処理システムであって、前記第2ロボットは、画像センサにより撮像した前記監視対象の画像を前記第1ロボットに送信し、前記第1ロボットは、前記第2ロボットから受信した前記監視対象の画像に基づいて、前記監視対象の行動に起因する危険を判定し、判定結果に基づいて、前記危険の発生を回避するための処理を実行する。 In order to solve the above problems, the information processing system of one form according to the present disclosure has a moving means, a first robot having an operating means for operating an object, and a moving means, and is subject to monitoring. An information processing system including a second robot that tracks a monitoring target, the second robot transmits an image of the monitoring target captured by an image sensor to the first robot, and the first robot Based on the image of the monitoring target received from the second robot, the danger caused by the behavior of the monitoring target is determined, and based on the determination result, a process for avoiding the occurrence of the danger is executed.
本開示の実施形態に係る情報処理の一例を示す図である。It is a figure which shows an example of information processing which concerns on embodiment of this disclosure. 実施形態に係る情報処理システムの構成例を示す図である。It is a figure which shows the structural example of the information processing system which concerns on embodiment. 実施形態に係る第1ロボット装置の構成例を示す図である。It is a figure which shows the structural example of the 1st robot apparatus which concerns on embodiment. 実施形態に係る第2ロボット装置の構成例を示す図である。It is a figure which shows the structural example of the 2nd robot apparatus which concerns on embodiment. 実施形態に係る情報処理の手順を示すフローチャートである。It is a flowchart which shows the procedure of information processing which concerns on embodiment. 情報処理システムの構成の概念図の一例を示す図である。It is a figure which shows an example of the conceptual diagram of the structure of an information processing system. 情報処理システムにおける探索処理の一例を示す図である。It is a figure which shows an example of the search process in an information processing system. 情報処理システムにおける探索処理の他の一例を示す図である。It is a figure which shows another example of the search process in an information processing system. 情報処理システムにおける誤飲抑制処理の一例を示す図である。It is a figure which shows an example of the accidental ingestion suppression processing in an information processing system. 情報処理システムにおける退避誘導処理の一例を示す図である。It is a figure which shows an example of the evacuation guidance processing in an information processing system. 情報処理システムにおける計画更新処理の一例を示す図である。It is a figure which shows an example of the plan update process in an information processing system. 情報処理システムにおける救出処理の一例を示す図である。It is a figure which shows an example of the rescue process in an information processing system. 情報処理システムにおける監視対象の認識の一例を示す図である。It is a figure which shows an example of recognition of the monitoring target in an information processing system. 情報処理システムにおけるオブジェクトの分類の一例を示す図である。It is a figure which shows an example of the classification of an object in an information processing system. 情報処理システムにおける危険度マップの更新の一例を示す図である。It is a figure which shows an example of the update of the risk degree map in an information processing system. 情報処理システムにおける危険度マップの更新の概念図の一例を示す図である。It is a figure which shows an example of the conceptual diagram of the update of the risk degree map in an information processing system. 第1ロボット装置や第2ロボット装置の機能を実現するコンピュータの一例を示すハードウェア構成図である。It is a hardware block diagram which shows an example of the computer which realizes the function of the 1st robot apparatus and the 2nd robot apparatus.
 以下に、本開示の実施形態について図面に基づいて詳細に説明する。なお、この実施形態により本願にかかる情報処理システム及び情報処理方法が限定されるものではない。また、以下の各実施形態において、同一の部位には同一の符号を付することにより重複する説明を省略する。 Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. The information processing system and the information processing method according to the present application are not limited by this embodiment. Further, in each of the following embodiments, duplicate description will be omitted by assigning the same reference numerals to the same parts.
 以下に示す項目順序に従って本開示を説明する。
  1.実施形態
   1-1.本開示の実施形態に係る情報処理の概要
    1-1-1.危険について
    1-1-2.危険判定について
    1-1-3.大小ロボットの組合せのメリット等
   1-2.実施形態に係る情報処理システムの構成
   1-3.実施形態に係る第1ロボット装置の構成
   1-4.実施形態に係る第2ロボット装置の構成
   1-5.実施形態に係る情報処理の手順
   1-6.情報処理システムの構成の概念図
   1-7.情報処理システムの処理例
    1-7-1.探索処理の例
    1-7-2.探索処理の他の例
    1-7-3.誤飲抑制処理の例
    1-7-4.退避誘導処理の例
    1-7-5.計画更新処理の例
    1-7-6.救出処理の例
   1-8.監視対象の認識例
   1-9.マップの更新例
    1-9-1.オブジェクトの分類
    1-9-2.危険度マップの更新
  2.その他の実施形態
   2-1.その他の構成例
   2-2.その他
  3.本開示に係る効果
  4.ハードウェア構成
The present disclosure will be described according to the order of items shown below.
 1. Embodiment
  1-1. Outline of information processing according to the embodiment of the present disclosure
   1-1-1. About danger
   1-1-2. Danger judgment
   1-1-3. Advantages of combining large and small robots
  1-2. Configuration of Information Processing System According to Embodiment
  1-3. Configuration of the first robot device according to the embodiment
  1-4. Configuration of the Second Robot Device According to the Embodiment
  1-5. Information processing procedure according to the embodiment
  1-6. Conceptual diagram of the configuration of the information processing system
  1-7. Processing example of information processing system
   1-7-1. Example of search processing
   1-7-2. Other examples of search processing
   1-7-3. Example of accidental ingestion suppression treatment
   1-7-4. Example of evacuation guidance processing
   1-7-5. Example of plan update process
   1-7-6. Example of rescue processing
  1-8. Recognition example of monitoring target
  1-9. Map update example
   1-9-1. Classification of objects
   1-9-2. Update of risk map
 2. Other Embodiments
  2-1. Other configuration examples
  2-2. Others
 3. Effect of this disclosure
 4. Hardware configuration
[1.実施形態]
[1-1.本開示の実施形態に係る情報処理の概要]
 図1は、本開示の実施形態に係る情報処理の一例を示す図である。本開示の実施形態に係る情報処理は、図1に示す第1ロボットである第1ロボット装置100及び第2ロボットである第2ロボット装置200を含む情報処理システム1(図2参照)によって実現される。
[1. Embodiment]
[1-1. Outline of information processing according to the embodiment of the present disclosure]
FIG. 1 is a diagram showing an example of information processing according to the embodiment of the present disclosure. The information processing according to the embodiment of the present disclosure is realized by the information processing system 1 (see FIG. 2) including the first robot device 100 which is the first robot and the second robot device 200 which is the second robot shown in FIG. To.
 情報処理システム1に含まれる第1ロボット装置100や第2ロボット装置200は、実施形態に係る情報処理を実行する。第1ロボット装置100は、移動手段(移動部15)や物体を操作する操作手段(操作部16)を有し、監視対象やオブジェクトに対する操作が可能な大型のロボット(「大ロボット」ともいう)である。また、第2ロボット装置200は、移動手段(移動部25)を有し、第1ロボット装置100よりもサイズが小さく、第1ロボット装置100よりも狭い場所(空間)へ移動可能な小型のロボット(小ロボット)である。 The first robot device 100 and the second robot device 200 included in the information processing system 1 execute information processing according to the embodiment. The first robot device 100 has a moving means (moving unit 15) and an operating means (operating unit 16) for operating an object, and is a large robot (also referred to as a “large robot”) capable of operating a monitored object or an object. Is. Further, the second robot device 200 has a moving means (moving unit 25), is smaller in size than the first robot device 100, and is a small robot that can move to a place (space) narrower than the first robot device 100. (Small robot).
 図1の例では、例えば住宅のリビング等、屋内居住環境である空間SPに位置する赤ちゃんを監視対象TGとして、監視対象TGの行動に起因する危険を判定し、判定結果に基づいて、危険の発生を回避するための処理を実行する場合を示す。なお、図1の例では、監視対象が赤ちゃんである場合を一例として示すが、監視対象は赤ちゃんに限らず、赤ちゃんよりも大きな子供やペット等、種々の対象であってもよい。監視対象は、例えば行動の予測が難しく、言語による行動の抑制が困難な自律移動(行動)する主体であれば、どのような対象であってもよい。また、図1の例では、屋内居住環境である空間SP内に赤ちゃんである監視対象TGが一人だけであり、監視対象TGを監視する監視者(親等)が不在の場合を示すが、監視者が空間SP内に位置したり、空間SPの近くに位置したりする場合であってもよい。 In the example of FIG. 1, a baby located in a space SP which is an indoor living environment such as a living room of a house is set as a monitored TG, and a danger caused by the behavior of the monitored TG is determined, and the danger is determined based on the determination result. The case where the process for avoiding the occurrence is executed is shown. In the example of FIG. 1, the case where the monitoring target is a baby is shown as an example, but the monitoring target is not limited to the baby, and may be various targets such as children and pets larger than the baby. The monitoring target may be any target as long as it is an autonomously moving (behavior) subject whose behavior is difficult to predict and whose behavior is difficult to be suppressed by language. Further, in the example of FIG. 1, there is only one monitored TG that is a baby in the space SP that is an indoor living environment, and there is no observer (relative) who monitors the monitored TG. May be located in the space SP or near the space SP.
 また、図1に示すように、監視対象TGが位置する空間SPには、複数の物体(オブジェクト)が位置する。空間SPには、流し台であるオブジェクトOB1やテーブルであるオブジェクトOB2やソファーであるオブジェクトOB3を含む複数のオブジェクトOB1~OB7等が位置する。なお、図1では、説明のためにオブジェクトOB1~OB7にのみ符号を付するが、空間SPには、オブジェクトOB1~OB7以外にも、例えば右端中央のテレビ等、多数の物体(オブジェクト)が位置し、情報処理システム1により認識される。また、空間SPには、大ロボットである第1ロボット装置100と、監視対象TGを追尾する小ロボットである第2ロボット装置200が位置する。図1の例では、第1ロボット装置100と第2ロボット装置200とが連携して、監視対象TGを監視し、監視対象TGの行動に起因する危険を判定する。 Further, as shown in FIG. 1, a plurality of objects (objects) are located in the space SP where the monitored TG is located. A plurality of objects OB1 to OB7 including an object OB1 which is a sink, an object OB2 which is a table, and an object OB3 which is a sofa are located in the space SP. In FIG. 1, only the objects OB1 to OB7 are designated for the sake of explanation, but in addition to the objects OB1 to OB7, a large number of objects (objects) such as a television in the center of the right end are located in the space SP. Then, it is recognized by the information processing system 1. Further, in the space SP, a first robot device 100, which is a large robot, and a second robot device 200, which is a small robot that tracks the monitored TG, are located. In the example of FIG. 1, the first robot device 100 and the second robot device 200 cooperate with each other to monitor the monitored TG and determine the danger caused by the behavior of the monitored TG.
 まず、第2ロボット装置200は、画像センサ241により検知を行う(ステップS11)。第2ロボット装置200は、監視対象TGを追尾し、監視対象TGを撮像する。第2ロボット装置200は、画像センサ241により画像IM1を撮像する。そして、第2ロボット装置200は、撮像した画像IM1に含まれる人を認識したり、顔の位置を推定したりする。第2ロボット装置200は、人の認識に関する種々の従来技術を適宜用いて、画像センサ241が検知した画像IM1に含まれる人を認識する。例えば、第2ロボット装置200は、一般物体認識等の物体認識に関する種々の技術を適宜用いて、画像センサ241が検知した画像IM1に含まれる人等の各種の物体を認識する。図1の例では、第2ロボット装置200は、画像IM1に含まれる人(赤ちゃん)を監視対象TGとして認識する。なお、第2ロボット装置200が人の認識を行わない場合、第1ロボット装置100が行ってもよい。 First, the second robot device 200 detects by the image sensor 241 (step S11). The second robot device 200 tracks the monitored TG and images the monitored TG. The second robot device 200 captures the image IM1 by the image sensor 241. Then, the second robot device 200 recognizes a person included in the captured image IM1 and estimates the position of the face. The second robot device 200 recognizes a person included in the image IM1 detected by the image sensor 241 by appropriately using various conventional techniques related to human recognition. For example, the second robot device 200 recognizes various objects such as a person included in the image IM1 detected by the image sensor 241 by appropriately using various techniques related to object recognition such as general object recognition. In the example of FIG. 1, the second robot device 200 recognizes a person (baby) included in the image IM1 as a monitoring target TG. If the second robot device 200 does not recognize a person, the first robot device 100 may do so.
 また、第2ロボット装置200は、顔の認識に関する種々の従来技術を適宜用いて、画像IM1に含まれる人の顔の位置を推定する。図1の例では、第2ロボット装置200は、画像IM1に含まれる人(赤ちゃん)である監視対象TGの顔FCの位置を推定する。なお、第2ロボット装置200が顔の位置の推定を行わない場合、第1ロボット装置100が行ってもよい。第2ロボット装置200は、推定される監視対象TGの顔を撮像可能な位置に移動部25により移動し、監視対象TGの顔を撮像可能な位置で監視対象TGを追尾するとともに、監視対象TGを撮像する。 Further, the second robot device 200 estimates the position of the human face included in the image IM1 by appropriately using various conventional techniques related to face recognition. In the example of FIG. 1, the second robot device 200 estimates the position of the face FC of the monitored target TG, which is a person (baby) included in the image IM1. If the second robot device 200 does not estimate the position of the face, the first robot device 100 may do so. The second robot device 200 moves the estimated face of the monitored TG to a position where it can be imaged by the moving unit 25, tracks the monitored TG at a position where the face of the monitored TG can be imaged, and tracks the monitored TG. To image.
 そして、第2ロボット装置200は、画像を第1ロボット装置100に送信する(ステップS12)。第2ロボット装置200は、撮像した監視対象TGを含む画像を第1ロボット装置100に送信する。例えば、第2ロボット装置200は、撮像した監視対象TGの顔FCを含む画像を第1ロボット装置100に送信する。なお、第2ロボット装置200は、監視対象TGを含む画像に限らず、オブジェクトを撮像した画像等、種々の情報を第1ロボット装置100に送信する。 Then, the second robot device 200 transmits an image to the first robot device 100 (step S12). The second robot device 200 transmits an image including the captured monitored TG to the first robot device 100. For example, the second robot device 200 transmits an image including the face FC of the monitored TG imaged to the first robot device 100. The second robot device 200 transmits not only an image including the monitored TG but also various information such as an image obtained by capturing an object to the first robot device 100.
 そして、第1ロボット装置100は、危険度マップを更新する(ステップS13)。第1ロボット装置100は、第2ロボット装置200から取得した画像や画像センサ141により撮像した画像等、種々の情報を適宜用いて危険度マップを更新する。第1ロボット装置100は、危険度マップMP1を更新する。図1の例では、第1ロボット装置100は、危険度マップMP1を生成する。例えば、第1ロボット装置100は、認識したオブジェクトOB1~OB7を含む危険度マップMP1を生成する。例えば、第1ロボット装置100は、自己位置推定の技術により推定された第1ロボット装置100の位置や、第2ロボット装置200の位置や、監視対象TGの位置を含む危険度マップMP1を生成する。例えば、第1ロボット装置100は、SLAM(Simultaneous Localization and Mapping)の機能を有し、SLAMの技術を用いて、自己位置推定を行うとともに、危険度マップMP1等の環境地図の生成を行う。 Then, the first robot device 100 updates the risk map (step S13). The first robot device 100 updates the risk map by appropriately using various information such as an image acquired from the second robot device 200 and an image captured by the image sensor 141. The first robot device 100 updates the risk map MP1. In the example of FIG. 1, the first robot device 100 generates the risk map MP1. For example, the first robot device 100 generates a risk map MP1 including the recognized objects OB1 to OB7. For example, the first robot device 100 generates a risk map MP1 including the position of the first robot device 100 estimated by the self-position estimation technique, the position of the second robot device 200, and the position of the monitored TG. .. For example, the first robot device 100 has a SLAM (Simultaneous Localization and Mapping) function, and uses SLAM technology to estimate its own position and generate an environmental map such as a risk map MP1.
 なお、第1ロボット装置100は、画像に限らず、種々の情報を適宜用いて、危険度マップMP1を生成してもよい。第1ロボット装置100は、第2ロボット装置200に空間SP内を移動させながら、第2ロボット装置200に画像を撮像させ、第2ロボット装置200が撮像した画像を用いて、危険度マップMP1を生成してもよい。第1ロボット装置100は、空間SP内を移動しながら、画像センサ141により画像を撮像し、撮像した画像を用いて、危険度マップMP1を生成してもよい。また、第1ロボット装置100は、測距センサ等による点群を用いて危険度マップMP1を生成してもよい。また、第1ロボット装置100は、オブジェクト情報記憶部122(図3参照)に記憶されたオブジェクトの情報や、情報処理システム1の管理者等により入力されたオブジェクトの情報を基に、オブジェクトを認識してもよい。例えば、第1ロボット装置100は、オブジェクトの情報を基に、各オブジェクトが監視対象TGにとって危険なオブジェクトであるかを判定してもよい。第1ロボット装置100は、危険度マップMP1を基に、後述する危険判定条件を生成してもよい。なお、第1ロボット装置100は、情報処理システム1の管理者等により入力された危険判定条件を用いてもよい。 Note that the first robot device 100 may generate the risk map MP1 by appropriately using various information, not limited to the image. The first robot device 100 causes the second robot device 200 to take an image while moving the second robot device 200 in the space SP, and uses the image taken by the second robot device 200 to create a risk map MP1. It may be generated. The first robot device 100 may capture an image by the image sensor 141 while moving in the space SP, and generate the risk map MP1 using the captured image. Further, the first robot device 100 may generate a risk map MP1 by using a point cloud by a distance measuring sensor or the like. Further, the first robot device 100 recognizes an object based on the object information stored in the object information storage unit 122 (see FIG. 3) and the object information input by the administrator of the information processing system 1. You may. For example, the first robot device 100 may determine whether each object is a dangerous object for the monitored TG based on the object information. The first robot device 100 may generate a danger determination condition described later based on the risk map MP1. The first robot device 100 may use the danger determination conditions input by the administrator of the information processing system 1.
 そして、第1ロボット装置100は、危険判定を行う(ステップS14)。第1ロボット装置100は、第2ロボット装置200から受信した監視対象TGの画像IM1や危険度マップMP1に基づいて、監視対象TGの行動に起因する危険を判定する。なお、第1ロボット装置100は、監視対象TGの行動に起因する種々の危険を判定するが、この点についての詳細は後述する。 Then, the first robot device 100 makes a danger determination (step S14). The first robot device 100 determines the danger caused by the behavior of the monitored TG based on the image IM1 of the monitored TG received from the second robot device 200 and the risk map MP1. The first robot device 100 determines various dangers caused by the behavior of the monitored TG, and details of this point will be described later.
 そして、第1ロボット装置100は、判定結果に基づいて、危険の発生を回避するための処理を実行する(ステップS15)。第1ロボット装置100は、危険があると判定した場合、その危険の発生を回避するための処理を実行する。第1ロボット装置100は、ある危険判定条件を満たす場合、その危険の発生を回避するための処理を実行する。第1ロボット装置100は、危険があると判定した場合、第1ロボット装置100自身が危険の発生を回避するための処理を実行したり、第2ロボット装置200に危険の発生を回避するための行動を行わせたりする。なお、第1ロボット装置100は、危険の内容に応じて種々の危険を回避するための処理を実行するが、この点についての詳細は後述する。 Then, the first robot device 100 executes a process for avoiding the occurrence of danger based on the determination result (step S15). When the first robot device 100 determines that there is a danger, the first robot device 100 executes a process for avoiding the occurrence of the danger. When a certain danger determination condition is satisfied, the first robot device 100 executes a process for avoiding the occurrence of the danger. When the first robot device 100 determines that there is a danger, the first robot device 100 itself executes a process for avoiding the occurrence of the danger, or the second robot device 200 is for avoiding the occurrence of the danger. Make them take action. The first robot device 100 executes processes for avoiding various dangers according to the content of the dangers, and details of this point will be described later.
 上述したように、情報処理システム1は、第1ロボット装置100と第2ロボット装置200とが連携して、監視対象TGを監視し、監視対象TGの行動に起因する危険の発生を抑制する。これにより、情報処理システム1は、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 As described above, in the information processing system 1, the first robot device 100 and the second robot device 200 cooperate with each other to monitor the monitored TG and suppress the occurrence of danger caused by the behavior of the monitored TG. As a result, the information processing system 1 can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
[1-1-1.危険について]
 ここで、危険や危険判定に関連する点について詳述する。まず、危険について例示を用いて説明する。
[1-1-1. About danger]
Here, the points related to danger and danger judgment will be described in detail. First, the danger will be described with an example.
 監視対象の行動に起因する危険は、監視対象が行動することで監視対象自身に降りかかる危険や、監視対象以外に対して降りかかる危険を含む概念である。すなわち、監視対象の行動に起因する危険は、監視対象の行動に起因して監視対象に及ぶ危険と、監視対象の行動に起因して監視対象以外に及ぶ危険の両方が含まれる。 The danger caused by the behavior of the monitored target is a concept that includes the danger of falling on the monitored target itself due to the behavior of the monitored target and the danger of falling on the non-monitored target. That is, the danger caused by the behavior of the monitored target includes both the danger that extends to the monitored target due to the behavior of the monitored target and the danger that extends to the non-monitored target due to the behavior of the monitored target.
 監視対象に及ぶ危険には、監視対象がオブジェクトにぶつかること、監視対象が転落すること、監視対象がやけどをすること、監視対象がオブジェクトを誤飲すること、監視対象がオブジェクトに挟まれること等、監視対象自身に降りかかる種々の危険が含まれる。すなわち、監視対象に及ぶ危険には、監視対象自身に危害が及ぶ危険に対応する。 The dangers that reach the monitored object include that the monitored object hits an object, that the monitored object falls, that the monitored object is burned, that the monitored object accidentally swallows the object, that the monitored object is pinched by the object, etc. , Includes various dangers that fall on the monitored object itself. That is, the danger that affects the monitored object corresponds to the risk that the monitored object itself is harmed.
 また、監視対象以外に及ぶ危険には、監視対象の行動により物(オブジェクト)が破損すること、監視対象の行動により火災が発生すること等、監視対象の行動により監視対象以外の物体や環境に降りかかる種々の危険が含まれる。すなわち、監視対象以外に及ぶ危険には、監視対象以外のものに危害が及ぶ危険に対応する。 In addition, the dangers that extend to non-monitored objects include damage to objects due to the behavior of the monitored object, fires caused by the behavior of the monitored object, etc. Includes various dangers of falling. In other words, the danger that extends to something other than the monitoring target corresponds to the danger that harms something other than the monitoring target.
[1-1-2.危険判定及び回避処理について]
 次に、危険判定及び回避処理について例示を用いて説明する。情報処理システム1は、監視対象自身や監視対象以外のものに危害が及ぶ危険につながる前段階の監視対象の行動(前兆行動)を検知して、監視対象が前兆行動を行った場合、その前兆行動に対応する危険があると判定する。情報処理システム1は、各危険と前兆行動との対応付けた情報を用いて、監視対象が前兆行動を行った場合、その前兆行動に対応付けられた危険があると判定してもよい。そして、情報処理システム1は、前兆行動に対応付けられた危険があると判定した場合、その危険が発生する可能性があると予測して、その危険の発生を回避する処理(回避処理)等を実行する。なお前兆行動は、危険の発生につながる行動であれば、所定のエリアに位置することや、オブジェクトに接触することや、オブジェクトを把持することや、オブジェクトを凝視すること等、種々の種別の行動であってもよい。情報処理システム1は、危険度マップや、監視対象の位置や、監視対象を撮像した画像等のセンサ情報を用いて、監視対象の行動に起因する危険を判定する。例えば、情報処理システム1は、各危険を識別する情報に、その危険を判定する条件(危険判定条件)を対応付けた情報を用いて、各危険が発生する可能性があるかを予測する。
[1-1-2. Danger judgment and avoidance processing]
Next, the risk determination and the avoidance process will be described with reference to the examples. The information processing system 1 detects the behavior (precursor action) of the monitoring target in the previous stage that leads to the danger of harming the monitoring target itself or something other than the monitoring target, and when the monitoring target performs a precursory action, the precursory action. Judge that there is a risk of responding to the action. The information processing system 1 may determine that there is a danger associated with the precursory behavior when the monitored target performs the precursory behavior by using the information associated with each danger and the precursory behavior. Then, when the information processing system 1 determines that there is a danger associated with the precursory behavior, it predicts that the danger may occur and avoids the occurrence of the danger (avoidance processing) or the like. To execute. Omen behaviors are various types of behaviors such as being located in a predetermined area, touching an object, grasping an object, and staring at an object if the behavior leads to the occurrence of danger. It may be. The information processing system 1 determines the danger caused by the behavior of the monitored target by using the sensor information such as the risk map, the position of the monitored target, and the image obtained by capturing the monitored target. For example, the information processing system 1 predicts whether or not each danger may occur by using the information in which the information for identifying each danger is associated with the condition for determining the danger (danger determination condition).
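As an informal sketch only (not part of the disclosure), the association between precursor behaviors and dangers described above can be represented as a simple lookup: each observed behavior of the monitoring target is matched against registered precursor behaviors, and a matching entry yields the danger whose occurrence should be avoided. The behavior labels and table below are hypothetical.

    # Minimal sketch: map detected precursor behaviors to the dangers they are associated with.
    PRECURSOR_TO_DANGER = {
        ("grasping", "small_object"): "accidental_swallowing",
        ("entering", "danger_area"): "fall",
        ("touching", "hot_object"): "burn",
    }

    def dangers_for(observed_behaviors):
        # observed_behaviors: iterable of (behavior, object/area label) pairs
        return [PRECURSOR_TO_DANGER[b] for b in observed_behaviors if b in PRECURSOR_TO_DANGER]

    print(dangers_for([("grasping", "small_object"), ("sitting", "floor")]))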
Each process described below may be performed by any device included in the information processing system 1, such as the first robot device 100 or the second robot device 200. For example, the first robot device 100 may mainly perform the danger determination, and the second robot device 200 may perform part of the danger determination. For example, the second robot device 200 may use an image captured by its image sensor 241 to make a provisional (simple) determination of the monitoring target's danger of accidental ingestion, and the first robot device 100 may use the image and other information acquired from the second robot device 200 to make a secondary (full) determination of the monitoring target's danger of accidental ingestion.
The information processing system 1 may determine the danger caused by the behavior of the monitoring target by using location in a dangerous area (danger area) as the danger determination condition. The information processing system 1 may determine that the danger determination condition is satisfied, and that the danger may occur, when the position of the monitoring target is inside the danger area or within a predetermined range of the danger area (for example, within 30 cm of the danger area).
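As a hedged illustration of such a proximity check, the following sketch assumes, purely for the example, that a danger area is represented as an axis-aligned rectangle in metres and that the 30 cm margin above is the threshold.

```python
# Sketch of a danger-area proximity check (hypothetical representation:
# the danger area is assumed to be an axis-aligned rectangle in metres).
from dataclasses import dataclass

@dataclass
class Rect:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

def distance_to_rect(x: float, y: float, r: Rect) -> float:
    """Euclidean distance from point (x, y) to the rectangle (0 if inside)."""
    dx = max(r.x_min - x, 0.0, x - r.x_max)
    dy = max(r.y_min - y, 0.0, y - r.y_max)
    return (dx * dx + dy * dy) ** 0.5

def in_or_near_danger_area(x: float, y: float, area: Rect,
                           margin: float = 0.30) -> bool:
    """True if the target is inside the danger area or within `margin` metres of it."""
    return distance_to_rect(x, y, area) <= margin

# Example: a shelf area and a target standing 20 cm away from its edge.
shelf = Rect(1.0, 0.0, 1.5, 0.4)
print(in_or_near_danger_area(1.7, 0.2, shelf))   # True (within 30 cm)
```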
In the example of FIG. 1, the information processing system 1 may treat the areas where the object OB2 and the object OB3 are arranged as danger areas and determine the danger caused by the behavior of the monitoring target TG. The information processing system 1 determines the danger of the monitoring target TG falling, treating as danger areas the areas where the objects OB2 and OB3, which the monitoring target TG could climb onto and fall from, are arranged. For example, the information processing system 1 determines that there is a danger of the monitoring target TG falling when the monitoring target TG is at the position of the object OB3 or within a predetermined range of the position of the object OB3.
When the information processing system 1 determines that there is a danger of the monitoring target TG falling, it executes processing for avoiding the occurrence of the danger of falling. For example, when the first robot device 100 determines that there is a danger of the monitoring target TG falling, it executes processing for evacuating the monitoring target TG from the danger area with the operation unit 16. The first robot device 100 grips the monitoring target TG with the operation unit 16 and carries the monitoring target TG to a position away from the danger area. Alternatively, the first robot device 100 may instruct the second robot device 200 to perform an action that attracts the attention of the monitoring target TG, thereby moving the monitoring target TG to a position away from the danger area.
The information processing system 1 may also treat the area where the object OB7 is arranged as a danger area and determine the danger caused by the behavior of the monitoring target TG. For example, assume that the object OB7 is an expensive article for which the guardian of the monitoring target TG or the like has designated damage avoidance. The information processing system 1 determines the danger that reaches something other than the monitoring target TG, treating the area where the object OB7 is arranged as a danger area. For example, the information processing system 1 determines that there is a danger of the object OB7 being damaged by the monitoring target TG when the monitoring target TG is at the position of the object OB7 or within a predetermined range of the position of the object OB7.
When the information processing system 1 determines that there is a danger of damage to the object OB7, it executes processing for avoiding the occurrence of the danger of damage to the object OB7. For example, when the first robot device 100 determines that there is a danger of damage to the object OB7, it executes processing for evacuating the monitoring target TG from the danger area with the operation unit 16. The first robot device 100 grips the monitoring target TG with the operation unit 16 and carries the monitoring target TG to a position away from the danger area (the position of the object OB7). Alternatively, the first robot device 100 may instruct the second robot device 200 to perform an action that attracts the attention of the monitoring target TG, thereby moving the monitoring target TG to a position away from the danger area (the position of the object OB7).
The information processing system 1 may determine the danger caused by the behavior of the monitoring target by using contact with an object as the danger determination condition. The information processing system 1 may determine that the danger determination condition is satisfied, and that the danger may occur, when the monitoring target grasps an object or when the monitoring target remains in contact with an object for a predetermined time or longer (for example, 10 seconds or longer).
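A minimal sketch of such a grasp/contact-duration condition, assuming (only for this example) that contact events are reported per object with timestamps, could look like the following; the class and field names are hypothetical.

```python
# Sketch of a grasp/contact danger determination condition with a
# duration threshold (hypothetical event representation).
import time
from typing import Dict, Optional

CONTACT_THRESHOLD_S = 10.0   # contact must last at least this long

class ContactMonitor:
    """Tracks when contact with an object started and flags long contacts."""

    def __init__(self) -> None:
        self._contact_start: Dict[str, float] = {}

    def update(self, object_id: Optional[str], grasped: bool,
               now: Optional[float] = None) -> bool:
        """Return True when the danger determination condition is satisfied."""
        now = time.monotonic() if now is None else now
        if object_id is None:            # no contact: reset all timers
            self._contact_start.clear()
            return False
        if grasped:                      # grasping satisfies the condition at once
            return True
        # keep only the timer for the object currently in contact
        start = self._contact_start.get(object_id, now)
        self._contact_start = {object_id: start}
        return (now - start) >= CONTACT_THRESHOLD_S

# Example: continuous contact with an outlet for 12 seconds.
monitor = ContactMonitor()
print(monitor.update("outlet", grasped=False, now=0.0))    # False
print(monitor.update("outlet", grasped=False, now=12.0))   # True
```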
In the example of FIG. 1, the information processing system 1 may determine the danger caused by the behavior of the monitoring target TG in accordance with the monitoring target TG touching an electrical outlet (not shown) in the space SP. The information processing system 1 determines the danger of the monitoring target TG getting burned, using as the condition contact with the outlet, which could burn the monitoring target TG if touched. For example, the information processing system 1 determines that there is a danger of the monitoring target TG getting burned when the monitoring target TG touches the outlet. The information processing system 1 may determine that there is a danger of the monitoring target TG getting burned when the monitoring target TG grasps a plug connected to the outlet. The information processing system 1 may also determine that there is a danger of the monitoring target TG getting burned when the monitoring target TG is located within a predetermined range of the position of the outlet.
When the information processing system 1 determines that there is a danger of the monitoring target TG getting burned, it executes processing for avoiding the occurrence of the danger of burns. For example, when the first robot device 100 determines that there is a danger of the monitoring target TG getting burned, it grips the contacting part of the monitoring target TG, such as its hand, with the operation unit 16 and executes processing for releasing the monitoring target TG's contact with the object such as the outlet. The first robot device 100 may grip the monitoring target TG with the operation unit 16 and carry the monitoring target TG to a position away from the object such as the outlet. Alternatively, the first robot device 100 may instruct the second robot device 200 to perform an action that attracts the attention of the monitoring target TG, thereby moving the monitoring target TG to a position away from the object such as the outlet.
The information processing system 1 may determine the monitoring target's danger of accidental ingestion by using, as the danger determination condition, that the monitoring target grasps an object and that the object is one that carries a danger of accidental ingestion (an ingestion-risk object). In this case, the information processing system 1 uses information indicating whether an object is an ingestion-risk object (ingestion-risk object information) to determine whether the object grasped by the monitoring target TG is an ingestion-risk object. For example, the information processing system 1 uses the ingestion-risk object information stored in the object information storage unit 122 (see FIG. 3) to determine whether the object grasped by the monitoring target TG is an ingestion-risk object.
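As a hedged illustration only, the lookup against the stored ingestion-risk object information might be sketched as follows; the in-memory dictionary stands in for the object information storage unit 122, and the field names and example entries are assumptions.

```python
# Sketch of the ingestion-risk lookup (the dictionary stands in for the
# object information storage unit 122; all field names are hypothetical).
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ObjectInfo:
    size_cm: float            # longest dimension of the object
    ingestion_risk: bool      # True if accidental ingestion is dangerous
    contact_risk: bool        # True if contact or grasping is dangerous

object_store: Dict[str, ObjectInfo] = {
    "cigarette": ObjectInfo(size_cm=8.0, ingestion_risk=True, contact_risk=False),
    "vase":      ObjectInfo(size_cm=30.0, ingestion_risk=False, contact_risk=False),
    "outlet":    ObjectInfo(size_cm=12.0, ingestion_risk=False, contact_risk=True),
}

def ingestion_danger(grasped_object: Optional[str]) -> bool:
    """Danger determination condition: the target grasps an ingestion-risk object."""
    if grasped_object is None:
        return False
    info = object_store.get(grasped_object)
    return info is not None and info.ingestion_risk

print(ingestion_danger("cigarette"))   # True
print(ingestion_danger("vase"))        # False
```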
In the example of FIG. 1, the information processing system 1 may determine the danger caused by the behavior of the monitoring target TG in accordance with the monitoring target TG grasping a cigarette (not shown) in the space SP. The information processing system 1 determines the monitoring target TG's danger of accidental ingestion, using as the condition the grasping of a cigarette, which the monitoring target TG could accidentally swallow. For example, the information processing system 1 determines that there is a danger of accidental ingestion when the monitoring target TG grasps the cigarette. The information processing system 1 may determine that there is a danger of accidental ingestion when the monitoring target TG looks at (stares at) the cigarette for a certain time or longer. The information processing system 1 may also determine that there is a danger of accidental ingestion when the monitoring target TG is located within a predetermined range of the position of the cigarette.
When the information processing system 1 determines that there is a danger of accidental ingestion by the monitoring target TG, it executes processing for avoiding the occurrence of the danger of accidental ingestion. For example, when the first robot device 100 determines that there is a danger of accidental ingestion by the monitoring target TG, it grips the hand of the monitoring target TG with the operation unit 16 and executes processing for releasing the monitoring target TG's grip on the ingestion-risk object such as the cigarette. The first robot device 100 may instead grip the ingestion-risk object such as the cigarette with the operation unit 16 and take the ingestion-risk object away from the monitoring target TG, that is, execute processing for releasing the monitoring target TG's grip on the ingestion-risk object.
[1-1-3. Advantages of combining large and small robots, etc.]
As described above, the information processing system 1 performs processing by combining a large robot (first robot device 100) that grasps the entire environment with a small robot (second robot device 200) that tracks the monitoring target and sends information focused on the monitoring target's face, hands, and the like. By combining the large robot and the small robot, the information processing system 1 allows the small robot to search for the monitoring target and inform the large robot of the child's position. The small robot also allows the information processing system 1 to constantly monitor the child's hands. Because the small robot is small, its presence is unobtrusive even when nearby, which reduces the possibility that the monitoring target pays attention to it. The small robot can also move into and monitor places the large robot cannot reach. The large robot, in turn, can move the child, can perform large-scale computation, and can take a bird's-eye view of the entire room.
The information processing system 1 can grasp the child's mouth and the entire environment at the same time. For example, the information processing system 1 can simultaneously capture an image showing the child's mouth and a vase on a desk. Even when it is necessary to go around to a child under a chair, the small robot (second robot device 200) can move around to the child under the chair.
Even when sufficient height is required to see what is on a desk or shelf, the information processing system 1 can detect the information on the desk or shelf with the large robot (first robot device 100). The information processing system 1 can also use the large robot (first robot device 100), which has the operation unit 16, to bring down a monitoring target that has climbed onto a step or a chair.
Even when the system cannot find where the child is, such as when a baby is lost from view, the information processing system 1 can appropriately search for the child with the large robot (first robot device 100) and the small robot (second robot device 200).
As described above, by using moving robots such as the first robot device 100 and the second robot device 200, the information processing system 1 can suppress the occurrence of danger caused by the behavior of the monitoring target while saving space and without hindering the parents' activities. Moreover, because the robots move autonomously and create the risk map without sensors being installed on structures of the space SP such as the ceiling, walls, and floor, the information processing system 1 can simultaneously grasp the position of the monitoring target such as a baby.
The information processing system 1 can also suppress the occurrence of dangers involving hazardous objects that baby guards cannot handle, for example dangers involving moving objects and small gaps. Using only the sensors of robots such as the first robot device 100 and the second robot device 200, the information processing system 1 can update the risk map in real time and protect the monitoring target such as a child from danger.
Therefore, the information processing system 1 enables the person monitoring the monitoring target (an adult or the like) to go out with peace of mind, leaving the monitoring target (a child or the like) behind.
[1-2. Configuration of the information processing system according to the embodiment]
The information processing system 1 shown in FIG. 2 will be described. FIG. 2 is a diagram showing a configuration example of the information processing system according to the embodiment. As shown in FIG. 2, the information processing system 1 includes the first robot device 100 and the second robot device 200. The second robot device 200 and the first robot device 100 are communicably connected via the network N by wire or wirelessly. The information processing system 1 shown in FIG. 2 may include a plurality of second robot devices 200 and a plurality of first robot devices 100. The first robot device 100 and the second robot device 200 may communicate with each other by a wireless communication function such as Wi-Fi (registered trademark) (Wireless Fidelity) or Bluetooth (registered trademark).
The first robot device 100 is a robot having moving means and operating means for manipulating objects. The first robot device 100 is also an information processing device that performs various types of information processing. The first robot device 100 communicates with the second robot device 200 via the network N and issues control instructions to the second robot device 200 based on information collected by the second robot device 200 and various sensors. The first robot device 100 is a large robot (large robot) capable of manipulating the monitoring target and objects.
The first robot device 100 has a self-position estimation function. The first robot device 100 has an object recognition function, for example a function of recognizing objects such as a vase. The first robot device 100 has functions for creating and updating the risk map. The first robot device 100 has a function of mapping the position of the second robot device 200, which is the small robot. The first robot device 100 estimates the position of the second robot device 200 from images sent by the second robot device 200 or from images captured by the first robot device 100 itself. For example, the first robot device 100 estimates the position of the second robot device 200 based on images acquired from the second robot device 200 or on the second robot device 200 appearing in images captured by the image sensor 141. The first robot device 100 has a function of creating a map using, as input, images acquired from the second robot device 200. The first robot device 100 has a function of evacuating the child from a prohibited area, for example by means of the operation unit 16 such as an arm. When attracting the attention of the monitoring target with sound, the first robot device 100 may have an output unit that outputs voice. The first robot device 100 also has, for example, face recognition, person recognition, and object recognition functions.
The second robot device 200 is a robot that has moving means and tracks the monitoring target. The second robot device 200 is a small robot (small robot) whose size is smaller than that of the first robot device 100, which is the large robot. The second robot device 200 is also an information processing device that performs various types of information processing. The second robot device 200 communicates with the first robot device 100 via the network N and transmits information to the first robot device 100.
For example, the second robot device 200 has face recognition, person recognition, and object recognition functions. For example, the second robot device 200 has a function of recognizing an object in the monitoring target's hands. The second robot device 200 detects images, including images used for position mapping and map creation.
The second robot device 200 transmits images to the first robot device 100. The second robot device 200 tracks the monitoring target such as a child. The second robot device 200 does not have to store the risk map in its storage unit 22. The second robot device 200 moves around the monitoring target such as a child so as to capture the monitoring target from the front. The second robot device 200 has an alert function and a movement function.
[1-3. Configuration of the first robot device according to the embodiment]
Next, the configuration of the first robot device 100 that executes the information processing according to the embodiment will be described. FIG. 3 is a diagram showing a configuration example of the first robot device according to the embodiment.
As shown in FIG. 3, the first robot device 100 includes a communication unit 11, a storage unit 12, a control unit 13, a sensor unit 14, a moving unit 15, and an operation unit 16. In this way, the first robot device 100 has the moving unit 15 as moving means and the operation unit 16 as operating means for manipulating objects.
The communication unit 11 is realized by, for example, a NIC (Network Interface Card), a communication circuit, or the like. The communication unit 11 is connected to the network N (the Internet or the like) by wire or wirelessly, and transmits and receives information to and from other devices via the network N.
The storage unit 12 is realized by, for example, a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk or an optical disk. The storage unit 12 includes a map information storage unit 121, an object information storage unit 122, and a danger determination information storage unit 123.
The map information storage unit 121 stores various kinds of information related to maps. For example, the map information storage unit 121 stores the risk map. The map information storage unit 121 stores a risk map based on information detected by the second robot device 200. For example, the map information storage unit 121 stores a risk map including the position of the monitoring target, the position of the first robot device 100, and the position of the second robot device 200. For example, the map information storage unit 121 stores a risk map onto which the monitoring target, the first robot device 100, and the second robot device 200 are mapped. For example, the map information storage unit 121 stores a three-dimensional risk map.
For example, the map information storage unit 121 stores information such as the risk map MP1. For example, the map information storage unit 121 stores the risk map MP1 onto which the monitoring target TG, the first robot device 100, and the second robot device 200 are mapped. For example, the map information storage unit 121 may store a two-dimensional risk map. For example, the map information storage unit 121 may store an occupancy grid map.
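Purely as an illustrative sketch (not the disclosed data format), a two-dimensional risk map of this kind could be represented as a grid of risk values onto which the positions of the monitoring target and the robots are mapped; the cell size, class name, and entity labels below are assumptions made only for this example.

```python
# Sketch of a 2-D risk map as an occupancy-grid-like array of risk values,
# with entity positions mapped to cells. Cell size and names are assumptions.
import numpy as np

CELL_SIZE_M = 0.10   # assumed resolution: 10 cm per cell

class RiskMap:
    def __init__(self, width_m: float, height_m: float) -> None:
        shape = (int(height_m / CELL_SIZE_M), int(width_m / CELL_SIZE_M))
        self.risk = np.zeros(shape)                     # 0.0 = safe, 1.0 = dangerous
        self.entities: dict[str, tuple[int, int]] = {}  # e.g. "TG", "robot100", "robot200"

    def cell(self, x_m: float, y_m: float) -> tuple[int, int]:
        return int(y_m / CELL_SIZE_M), int(x_m / CELL_SIZE_M)

    def mark_danger(self, x_m: float, y_m: float, risk: float) -> None:
        self.risk[self.cell(x_m, y_m)] = risk

    def update_entity(self, name: str, x_m: float, y_m: float) -> None:
        self.entities[name] = self.cell(x_m, y_m)

    def risk_at_entity(self, name: str) -> float:
        return float(self.risk[self.entities[name]])

# Example: mark a shelf edge as dangerous and check the target's cell.
mp1 = RiskMap(width_m=6.0, height_m=4.0)
mp1.mark_danger(1.2, 0.3, risk=0.9)
mp1.update_entity("TG", 1.2, 0.3)
print(mp1.risk_at_entity("TG"))   # 0.9
```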
The object information storage unit 122 stores various kinds of information related to objects. The object information storage unit 122 stores the information of each object in association with information identifying that object. The object information storage unit 122 stores information related to the attributes of objects, such as the size of each object.
The object information storage unit 122 stores information indicating the position of each object, information indicating the area occupied by each object, and information indicating whether the position of each object can be changed.
The object information storage unit 122 stores information indicating the danger of each object. The object information storage unit 122 stores information indicating whether it is dangerous for the monitoring target to touch the object; for example, it stores objects that are dangerous for the monitoring target to touch as contact-risk objects. The object information storage unit 122 stores information indicating whether it is dangerous for the monitoring target to grasp the object; for example, it stores objects that are dangerous for the monitoring target to grasp as grasp-risk objects.
The object information storage unit 122 stores information indicating whether there is a danger of the monitoring target accidentally swallowing the object, that is, whether it is dangerous for the monitoring target to put the object in its mouth. For example, the object information storage unit 122 stores objects that the monitoring target could accidentally swallow as ingestion-risk objects.
The object information storage unit 122 stores information indicating the value of each object, such as information indicating whether the object is expensive. The object information storage unit 122 stores information indicating whether it is dangerous for the monitoring target to approach the object.
The danger determination information storage unit 123 stores various kinds of information related to the danger determination. The danger determination information storage unit 123 stores conditions for determining dangers. The danger determination information storage unit 123 stores, in association with information identifying each danger, the condition for determining that danger (danger determination condition).
The danger determination information storage unit 123 stores a condition for determining whether the monitoring target is located in a dangerous area (danger area). The danger determination information storage unit 123 stores information indicating the danger area as the danger determination condition for entry into the danger area. The danger determination information storage unit 123 stores, as the danger determination condition for entry into the danger area, that the monitoring target is located inside the danger area or within a predetermined range of the danger area (for example, within 50 cm of the danger area).
The danger determination information storage unit 123 stores a condition for determining whether the monitoring target touches an object that is dangerous to touch. The danger determination information storage unit 123 stores, as the contact danger determination condition, that the monitoring target touches an object that is dangerous to touch, that is, that the monitoring target touches an object and the object is a contact-risk object.
The danger determination information storage unit 123 stores a condition for determining whether the monitoring target grasps an object that is dangerous to grasp. The danger determination information storage unit 123 stores, as the grasp danger determination condition, that the monitoring target grasps an object that is dangerous to grasp, that is, that the monitoring target grasps an object and the object is a grasp-risk object.
The danger determination information storage unit 123 stores a condition for determining whether the monitoring target accidentally swallows an object. The danger determination information storage unit 123 stores, as the accidental ingestion danger determination condition, that the monitoring target holds an object in its hand and the object is one that carries a danger of accidental ingestion. For example, the danger determination information storage unit 123 stores, as the accidental ingestion danger determination condition, that the monitoring target holds an object in its hand and the object is an ingestion-risk object.
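As a non-authoritative sketch of how such condition records might be laid out in this storage unit (all identifiers, kinds, and parameters below are hypothetical), each danger identifier could be stored together with the type of condition and its parameters:

```python
# Hypothetical layout of danger determination condition records keyed by
# danger ID, standing in for the danger determination information storage unit 123.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class DangerCondition:
    kind: str                            # "area", "contact", "grasp", or "ingestion"
    area_id: Optional[str] = None        # danger area, for "area" conditions
    margin_m: float = 0.0                # allowed distance from the danger area
    object_class: Optional[str] = None   # risk-object class for object conditions

danger_condition_store: Dict[str, DangerCondition] = {
    "area_entry": DangerCondition(kind="area", area_id="shelf_area", margin_m=0.50),
    "burn":       DangerCondition(kind="contact", object_class="contact_risk"),
    "grasp":      DangerCondition(kind="grasp", object_class="grasp_risk"),
    "ingestion":  DangerCondition(kind="ingestion", object_class="ingestion_risk"),
}

print(danger_condition_store["area_entry"].margin_m)   # 0.5
```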
The storage unit 12 is not limited to the map information storage unit 121, the object information storage unit 122, and the danger determination information storage unit 123, and stores various other kinds of information. The storage unit 12 may store various kinds of information related to the operation unit 16. For example, the storage unit 12 may store information indicating the number of operation units 16 and their mounting positions. For example, the storage unit 12 may store various kinds of information used for identifying (estimating) objects.
The control unit 13 is realized by, for example, a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or the like executing a program stored in the first robot device 100 (for example, an information processing program according to the present disclosure) with a RAM (Random Access Memory) or the like as a work area. The control unit 13 may also be realized by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
As shown in FIG. 3, the control unit 13 includes an acquisition unit 131, a recognition unit 132, a generation unit 133, an estimation unit 134, a determination unit 135, a planning unit 136, and an execution unit 137, and realizes or executes the information processing functions and actions described below. The internal configuration of the control unit 13 is not limited to the configuration shown in FIG. 3, and may be any other configuration that performs the information processing described later.
The acquisition unit 131 acquires various kinds of information. The acquisition unit 131 acquires and receives various kinds of information from external information processing devices, and receives various kinds of information from the second robot device 200. The acquisition unit 131 acquires various kinds of information from the storage unit 12, that is, from the map information storage unit 121, the object information storage unit 122, and the danger determination information storage unit 123. The acquisition unit 131 acquires information from the recognition unit 132, the generation unit 133, the estimation unit 134, the determination unit 135, and the planning unit 136. The acquisition unit 131 stores the acquired information in the storage unit 12.
The acquisition unit 131 acquires sensor information detected by the sensor unit 14. The acquisition unit 131 acquires sensor information (image information) detected by the image sensor 141, that is, images captured by the image sensor 141. The acquisition unit 131 receives images of the monitoring target from the second robot device 200. The acquisition unit 131 receives, from the second robot device 200, information indicating that an alert has been issued.
The recognition unit 132 recognizes various kinds of information. The recognition unit 132 analyzes various kinds of information, including image information. The recognition unit 132 analyzes various kinds of information from the image information based on information from external information processing devices and information stored in the storage unit 12, and identifies and extracts various kinds of information from the image information. The recognition unit 132 performs recognition based on the analysis results and recognizes various kinds of information based on the analysis results.
The recognition unit 132 performs analysis processing on images and various kinds of processing related to image processing. The recognition unit 132 processes the image information (images) acquired by the acquisition unit 131, the image information (images) captured by the second robot device 200, and the image information (images) captured by the image sensor 141. The recognition unit 132 processes images by appropriately using techniques related to image processing.
The recognition unit 132 recognizes objects. The recognition unit 132 recognizes each object included in images detected by the image sensor 141 and in images acquired from the second robot device 200 by appropriately using various techniques related to object recognition, such as generic object recognition. The recognition unit 132 recognizes people in images and recognizes people's faces in images. The recognition unit 132 recognizes the face of the monitoring target with the face recognition function, and recognizes the monitoring target as a person with the person recognition function.
The generation unit 133 generates various kinds of information. The generation unit 133 generates various kinds of information based on information from external information processing devices and information stored in the storage unit 12. The generation unit 133 generates various kinds of information based on information from the second robot device 200.
The generation unit 133 generates various kinds of information based on the information acquired by the acquisition unit 131, the information recognized by the recognition unit 132, the information estimated by the estimation unit 134, and the information determined by the determination unit 135.
The generation unit 133 generates various kinds of classification information and performs various classifications of information. The generation unit 133 performs classification processing based on the information acquired by the acquisition unit 131 and classifies the information acquired by the acquisition unit 131. The generation unit 133 also performs classification processing based on the information stored in the storage unit 12.
The generation unit 133 performs various classifications based on the information acquired by the acquisition unit 131, using various kinds of sensor information detected by the sensor unit 14, including the sensor information detected by the image sensor 141.
The generation unit 133 generates the risk map related to dangers. The generation unit 133 maps the position of the second robot device 200.
The estimation unit 134 estimates various kinds of information. The estimation unit 134 estimates various kinds of information based on information acquired from external information processing devices, information stored in the storage unit 12, and the results of the recognition processing by the recognition unit 132.
The estimation unit 134 also predicts various kinds of information. The estimation unit 134 predicts various kinds of information based on information acquired from external information processing devices, information stored in the storage unit 12, and the results of the recognition processing by the recognition unit 132.
The estimation unit 134 performs various estimations based on the information acquired by the acquisition unit 131, using various kinds of sensor information detected by the sensor unit 14, the sensor information detected by the image sensor 141, and the sensor information detected by the second robot device 200. The estimation unit 134 likewise makes various predictions based on the information acquired by the acquisition unit 131, using various kinds of sensor information detected by the sensor unit 14, the sensor information detected by the image sensor 141, and the sensor information detected by the second robot device 200.
The estimation unit 134 performs estimation processing based on the image information acquired by the acquisition unit 131 and the image information received from the second robot device 200. The estimation unit 134 estimates the robot's own position. The estimation unit 134 also estimates the position of the second robot device 200, for example based on images acquired from the second robot device 200.
The determination unit 135 determines various kinds of information. The determination unit 135 decides and identifies various kinds of information. The determination unit 135 determines various kinds of information based on information acquired from external information processing devices and information stored in the storage unit 12.
The determination unit 135 makes various determinations based on the information acquired by the acquisition unit 131, using various kinds of sensor information detected by the sensor unit 14, including the sensor information detected by the image sensor 141. The determination unit 135 determines various kinds of information based on the results of the recognition processing by the recognition unit 132 and the results of the estimation processing and prediction processing by the estimation unit 134.
The determination unit 135 determines the danger caused by the behavior of the monitoring target based on the images of the monitoring target that the acquisition unit 131 receives from the second robot device 200. The determination unit 135 determines the danger caused by the behavior of the monitoring target based on images of a monitoring target that is a child or a pet. The determination unit 135 determines the danger caused by the behavior of the monitoring target based on images of a monitoring target located in an indoor living environment.
The determination unit 135 determines danger that reaches the monitoring target due to the monitoring target's behavior. The determination unit 135 determines danger that reaches something other than the monitoring target due to the monitoring target's behavior, including danger that reaches objects other than the monitoring target. The determination unit 135 determines danger caused by the monitoring target's actions on an object, including danger caused by contact with the object and danger caused by grasping the object.
The determination unit 135 determines danger caused by movement of the monitoring target's position, including danger caused by the monitoring target entering an area where the occurrence of danger is predicted. The determination unit 135 determines the danger caused by the behavior of the monitoring target based on images of the monitoring target received from each of a plurality of second robot devices 200.
The planning unit 136 performs various kinds of planning and generates various kinds of information related to action plans. The planning unit 136 performs planning based on the information acquired by the acquisition unit 131, the estimation and prediction results of the estimation unit 134, and the determination results of the determination unit 135. The planning unit 136 performs action planning by using various techniques related to action planning. The planning unit 136 also performs action planning for the second robot device 200.
The execution unit 137 executes various kinds of processing. The execution unit 137 executes various kinds of processing based on information from external information processing devices and information stored in the storage unit 12, including the information stored in the map information storage unit 121, the object information storage unit 122, and the danger determination information storage unit 123. The execution unit 137 executes various kinds of processing based on the information acquired by the acquisition unit 131. The execution unit 137 functions as an operation control unit that controls the operation of the operation unit 16.
The execution unit 137 executes various kinds of processing based on the estimation and prediction results of the estimation unit 134, the determination results of the determination unit 135, and the action plan of the planning unit 136.
The execution unit 137 controls the moving unit 15 based on the action plan information generated by the planning unit 136 and executes the action corresponding to the action plan. By controlling the moving unit 15 based on the action plan information, the execution unit 137 executes the movement processing of the first robot device 100 in accordance with the action plan.
The execution unit 137 controls the operation unit 16 based on the action plan information generated by the planning unit 136 and executes the action corresponding to the action plan. By controlling the operation unit 16 based on the action plan information, the execution unit 137 executes object manipulation processing by the first robot device 100 in accordance with the action plan.
The execution unit 137 transmits various kinds of information to the second robot device 200 and thereby controls the behavior of the second robot device 200. By transmitting action plan information to the second robot device 200, the execution unit 137 causes the second robot device 200 to execute behavior processing in accordance with the action plan. By transmitting action plan information to the second robot device 200, the execution unit 137 causes the second robot device 200 to control its moving unit 25 based on the action plan information and to execute movement processing in accordance with the action plan.
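As a hedged illustration only (the disclosure does not specify any message format), transmitting action plan information to the second robot device over the network could look roughly like the following JSON-over-TCP sketch; the address, port, message fields, and helper name are assumptions introduced for this example.

```python
# Hypothetical sketch of sending action plan information from the first robot
# device to the second robot device as JSON over a TCP socket. The host, port,
# and message fields are assumptions made for this example only.
import json
import socket

SECOND_ROBOT_ADDR = ("192.168.0.42", 9000)   # assumed address of the second robot device

def send_action_plan(goal_xy: tuple, action: str) -> None:
    """Send a small action-plan message to the second robot device."""
    message = {
        "type": "action_plan",
        "action": action,            # e.g. "attract_attention", "track_target"
        "goal": {"x": goal_xy[0], "y": goal_xy[1]},
    }
    payload = json.dumps(message).encode("utf-8")
    with socket.create_connection(SECOND_ROBOT_ADDR, timeout=1.0) as conn:
        conn.sendall(payload)

# Example usage (requires a listener at SECOND_ROBOT_ADDR):
# send_action_plan((2.0, 1.5), "attract_attention")
```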
The execution unit 137 executes processing for avoiding the occurrence of danger based on the determination results of the determination unit 135. When the determination unit 135 determines that the occurrence of danger due to the behavior of the monitoring target is predicted, the execution unit 137 executes manipulation of an object by the operation unit 16, including manipulation of the monitoring target by the operation unit 16.
The execution unit 137 executes, with the operation unit 16, an operation of moving the monitoring target, such as an operation of evacuating the monitoring target from an area where the occurrence of danger is predicted. The execution unit 137 executes, with the operation unit 16, an operation of restraining the behavior of the monitoring target, such as an operation of gripping the monitoring target's arm.
When the determination unit 135 determines that the occurrence of danger due to the behavior of the monitoring target is predicted, the execution unit 137 instructs the second robot device 200 to take an action for avoiding the occurrence of the danger. The execution unit 137 instructs the second robot device 200 to take an action for attracting the monitoring target's attention, for example instructing it to output voice or to position itself within the monitoring target's field of view.
 センサ部14は、所定の情報を検知する。センサ部14は、画像を撮像する撮像手段としての画像センサ141を有する。 The sensor unit 14 detects predetermined information. The sensor unit 14 has an image sensor 141 as an image pickup means for capturing an image.
 画像センサ141は、画像情報を検知し、第1ロボット装置100の視覚として機能する。例えば、画像センサ141は、第1ロボット装置100の頭部に設けられる。画像センサ141は、画像情報を撮像する。 The image sensor 141 detects image information and functions as the visual sense of the first robot device 100. For example, the image sensor 141 is provided on the head of the first robot device 100. The image sensor 141 captures image information.
 また、センサ部14は、画像センサ141に限らず、各種センサを有してもよい。センサ部14は、近接センサを有してもよい。センサ部14は、LiDAR(Light Detection and Ranging、Laser Imaging Detection and Ranging)やToF(Time of Flight)センサやステレオカメラ等の測距センサを有してもよい。センサ部14は、GPS(Global Positioning System)センサ等の第1ロボット装置100の位置情報を検知するセンサ(位置センサ)を有してもよい。センサ部14は、力を検知し、第1ロボット装置100の触覚として機能する力覚センサを有してもよい。例えば、センサ部14は、操作部16の先端部(保持部)に設けられる力覚センサを有してもよい。センサ部14は、操作部16による物体への接触に関する検知を行う力覚センサを有してもよい。なお、センサ部14は、上記に限らず、種々のセンサを有してもよい。センサ部14は、加速度センサ、ジャイロセンサ等の種々のセンサを有してもよい。また、センサ部14における上記の各種情報を検知するセンサは共通のセンサであってもよいし、各々異なるセンサにより実現されてもよい。 Further, the sensor unit 14 is not limited to the image sensor 141 and may have various sensors. The sensor unit 14 may have a proximity sensor. The sensor unit 14 may have a ranging sensor such as a LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) sensor, a ToF (Time of Flight) sensor, or a stereo camera. The sensor unit 14 may have a sensor (position sensor) that detects the position information of the first robot device 100, such as a GPS (Global Positioning System) sensor. The sensor unit 14 may have a force sensor that detects a force and functions as the tactile sense of the first robot device 100. For example, the sensor unit 14 may have a force sensor provided at the tip (holding part) of the operation unit 16. The sensor unit 14 may have a force sensor that detects contact with an object by the operation unit 16. The sensor unit 14 is not limited to the above, and may have various sensors. The sensor unit 14 may have various sensors such as an acceleration sensor and a gyro sensor. Further, the sensors that detect the above-mentioned various information in the sensor unit 14 may be a common sensor, or may be realized by different sensors.
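 The paragraph above lists the kinds of sensors the sensor unit 14 may aggregate. The following Python sketch illustrates one possible way to expose heterogeneous sensors (image, ranging, force) behind a common read interface; the class names and the placeholder readings are assumptions made only for illustration.

from abc import ABC, abstractmethod
from typing import Any, Dict

class Sensor(ABC):
    """Common interface; in the disclosure the sensors may be shared or separate devices."""
    @abstractmethod
    def read(self) -> Any: ...

class ImageSensor(Sensor):
    def read(self) -> Any:
        return "image frame"   # placeholder for captured image data

class RangeSensor(Sensor):
    def read(self) -> Any:
        return 1.23            # placeholder distance in metres (e.g., ToF or LiDAR reading)

class ForceSensor(Sensor):
    def read(self) -> Any:
        return 0.5             # placeholder contact force at the holding part

class SensorUnit14:
    """Stand-in for the sensor unit 14 aggregating heterogeneous sensors."""
    def __init__(self) -> None:
        self.sensors: Dict[str, Sensor] = {
            "image": ImageSensor(),
            "range": RangeSensor(),
            "force": ForceSensor(),
        }

    def detect(self) -> Dict[str, Any]:
        # Collect one reading from every attached sensor.
        return {name: s.read() for name, s in self.sensors.items()}

if __name__ == "__main__":
    print(SensorUnit14().detect())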
 移動部15は、第1ロボット装置100における物理的構成を駆動する機能を有する。移動部15は、第1ロボット装置100の位置の移動を行うための機能を有する。移動部15は、例えばアクチュエータである。なお、移動部15は、第1ロボット装置100が所望の動作を実現可能であれば、どのような構成であってもよい。移動部15は、第1ロボット装置100の位置の移動等を実現可能であれば、どのような構成であってもよい。第1ロボット装置100がキャタピラやタイヤ等の移動機構を有する場合、移動部15は、キャタピラやタイヤ等を駆動する。例えば、移動部15は、実行部137による指示に応じて、第1ロボット装置100の移動機構を駆動することにより、第1ロボット装置100を移動させ、第1ロボット装置100の位置を変更する。 The moving unit 15 has a function of driving the physical configuration of the first robot device 100. The moving unit 15 has a function for moving the position of the first robot device 100. The moving unit 15 is, for example, an actuator. The moving unit 15 may have any configuration as long as the first robot device 100 can realize a desired operation. The moving unit 15 may have any configuration as long as the position of the first robot device 100 can be moved. When the first robot device 100 has a moving mechanism such as caterpillars and tires, the moving unit 15 drives the caterpillars and tires. For example, the moving unit 15 moves the first robot device 100 and changes the position of the first robot device 100 by driving the moving mechanism of the first robot device 100 in response to an instruction from the execution unit 137.
 実施形態に係る第1ロボット装置100は、操作部16を有する。操作部16は、人間でいう「手(腕)」に相当する部であり、第1ロボット装置100が他の物体に作用するための機能を実現する。第1ロボット装置100は、2本の手としての2つの操作部16を有する。なお、操作部16は、その数や第1ロボット装置100の形状に応じて、種々の位置に設けられてもよい。 The first robot device 100 according to the embodiment has an operation unit 16. The operation unit 16 is a unit corresponding to a human “hand (arm)” and realizes a function for the first robot device 100 to act on another object. The first robot device 100 has two operating units 16 as two hands. The operation units 16 may be provided at various positions depending on the number of the operation units 16 and the shape of the first robot device 100.
 操作部16は、実行部137による処理に応じて駆動する。操作部16は、物体を操作するマニピュレータである。例えば、操作部16は、アームとエンドエフェクタを有するマニピュレータであってもよい。操作部16は、物体に対して操作を行う。操作部16は、監視対象に対して操作を行う。操作部16は、監視対象の位置を移動させたり、監視対象の行動を抑制したりする操作を行う。操作部16は、オブジェクトに対して操作を行う。操作部16は、オブジェクトを把持したり、オブジェクトの位置を移動させたりする操作を行う。 The operation unit 16 is driven according to the processing by the execution unit 137. The operation unit 16 is a manipulator that operates an object. For example, the operating unit 16 may be a manipulator having an arm and an end effector. The operation unit 16 operates on the object. The operation unit 16 operates the monitored object. The operation unit 16 performs an operation of moving the position of the monitoring target or suppressing the behavior of the monitoring target. The operation unit 16 operates on the object. The operation unit 16 performs an operation of grasping the object and moving the position of the object.
 操作部16は、例えばエンドエフェクタやロボットハンド等である物体を保持する保持部と、例えばアクチュエータ等である保持部を駆動する駆動部とを有する。操作部16の保持部は、グリッパー、多指ハンド、ジャミングハンド、吸着ハンド、ソフトハンド等、所望の機能を実現可能であればどのような方式であってもよい。なお、操作部16の保持部は、物体を保持可能であればどのような構成により実現されてもよく、物体を把持する把持部であってもよいし、物体を吸着し保持する吸着部であってもよい。 The operation unit 16 has a holding part that holds an object, such as an end effector or a robot hand, and a drive part that drives the holding part, such as an actuator. The holding part of the operation unit 16 may be of any type, such as a gripper, a multi-finger hand, a jamming hand, a suction hand, or a soft hand, as long as the desired function can be realized. The holding part of the operation unit 16 may be realized by any configuration as long as it can hold an object; it may be a gripping part that grips the object, or a suction part that holds the object by suction.
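 A minimal sketch of the split between the holding part and the drive part of the operation unit 16 is given below, assuming hypothetical HoldingPart, DrivePart, and pick_and_place names: the drive part positions the arm and the holding part grasps or releases the object.

class HoldingPart:
    """Stand-in for the holding part (e.g., a gripper or a suction hand)."""
    def __init__(self) -> None:
        self.holding = False

    def close(self) -> None:
        self.holding = True
        print("holding part: object grasped")

    def open(self) -> None:
        self.holding = False
        print("holding part: object released")

class DrivePart:
    """Stand-in for the drive part (e.g., actuators that move the arm)."""
    def move_to(self, x: float, y: float, z: float) -> None:
        print(f"drive part: arm moved to ({x}, {y}, {z})")

class OperationUnit16:
    """Stand-in for the operation unit 16: the drive part positions, the holding part holds."""
    def __init__(self) -> None:
        self.holder = HoldingPart()
        self.driver = DrivePart()

    def pick_and_place(self, src, dst) -> None:
        self.driver.move_to(*src)
        self.holder.close()
        self.driver.move_to(*dst)
        self.holder.open()

if __name__ == "__main__":
    OperationUnit16().pick_and_place((0.2, 0.0, 0.1), (0.5, 0.3, 0.1))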
[1-4.実施形態に係る第2ロボット装置の構成]
 次に、実施形態に係る情報処理を実行する第2ロボット装置200の構成について説明する。図4は、実施形態に係る第2ロボット装置の構成例を示す図である。
[1-4. Configuration of the second robot device according to the embodiment]
Next, the configuration of the second robot device 200 that executes the information processing according to the embodiment will be described. FIG. 4 is a diagram showing a configuration example of the second robot device according to the embodiment.
 図4に示すように、第2ロボット装置200は、通信部21と、記憶部22と、制御部23と、センサ部24と、移動部25と、出力部27とを有する。このように、第2ロボット装置200は、移動手段である移動部25を有する。また、第2ロボット装置200は、所定の態様で情報を出力する出力手段である出力部27を有する。 As shown in FIG. 4, the second robot device 200 includes a communication unit 21, a storage unit 22, a control unit 23, a sensor unit 24, a moving unit 25, and an output unit 27. As described above, the second robot device 200 has a moving unit 25 which is a moving means. Further, the second robot device 200 has an output unit 27 which is an output means for outputting information in a predetermined mode.
 通信部21は、例えば、NICや通信回路等によって実現される。通信部21は、ネットワークN(インターネット等)と有線又は無線で接続され、ネットワークNを介して、他の装置等との間で情報の送受信を行う。 The communication unit 21 is realized by, for example, a NIC or a communication circuit. The communication unit 21 is connected to the network N (Internet, etc.) by wire or wirelessly, and transmits / receives information to / from other devices via the network N.
 記憶部22は、例えば、RAM、フラッシュメモリ等の半導体メモリ素子、または、ハードディスク、光ディスク等の記憶装置によって実現される。記憶部22は、各種情報を記憶する。記憶部22は、監視対象の監視に必要な各種情報を記憶する。記憶部22は、監視対象の追尾に必要な各種情報を記憶する。記憶部22は、第1ロボット装置100から受信した各種の情報を記憶する。記憶部22は、行動計画を示す情報を記憶する。記憶部22は、危険度マップを記憶してもよい。記憶部22は、画像センサ241により撮像された画像を記憶してもよい。 The storage unit 22 is realized by, for example, a semiconductor memory element such as a RAM or a flash memory, or a storage device such as a hard disk or an optical disk. The storage unit 22 stores various information. The storage unit 22 stores various information necessary for monitoring the monitoring target. The storage unit 22 stores various information necessary for tracking the monitoring target. The storage unit 22 stores various types of information received from the first robot device 100. The storage unit 22 stores information indicating an action plan. The storage unit 22 may store the risk level map. The storage unit 22 may store the image captured by the image sensor 241.
 なお、記憶部22は、上記に限らず、各種の情報が記憶される。記憶部22は、第2ロボット装置200が操作部を有する場合、操作部に関する各種情報を記憶してもよい。例えば、記憶部22は、操作部の数や操作部の設置位置を示す情報を記憶してもよい。例えば、記憶部22は、物体(オブジェクト)の特定(推定)に用いる各種情報を記憶してもよい。 The storage unit 22 is not limited to the above, and various types of information are stored. When the second robot device 200 has an operation unit, the storage unit 22 may store various information about the operation unit. For example, the storage unit 22 may store information indicating the number of operation units and the installation position of the operation units. For example, the storage unit 22 may store various types of information used for identifying (estimating) an object.
 制御部23は、例えば、CPUやMPU等によって、第2ロボット装置200内部に記憶されたプログラム(例えば、本開示に係る情報処理プログラム)がRAM等を作業領域として実行されることにより実現される。また、制御部23は、例えば、ASICやFPGA等の集積回路により実現されてもよい。 The control unit 23 is realized by, for example, a CPU, an MPU, or the like executing a program stored inside the second robot device 200 (for example, an information processing program according to the present disclosure) using a RAM or the like as a work area. Further, the control unit 23 may be realized by an integrated circuit such as an ASIC or an FPGA.
 図4に示すように、制御部23は、取得部231と、認識部232と、推定部233と、判定部234と、送信部235と、実行部236とを有し、以下に説明する情報処理の機能や作用を実現または実行する。なお、制御部23の内部構成は、図4に示した構成に限られず、後述する情報処理を行う構成であれば他の構成であってもよい。 As shown in FIG. 4, the control unit 23 includes an acquisition unit 231, a recognition unit 232, an estimation unit 233, a determination unit 234, a transmission unit 235, and an execution unit 236, and realizes or executes the functions and actions of the information processing described below. The internal configuration of the control unit 23 is not limited to the configuration shown in FIG. 4, and may be another configuration as long as it performs the information processing described later.
 取得部231は、各種情報を取得する。取得部231は、外部の情報処理装置から各種情報を取得する。取得部231は、外部の情報処理装置から各種情報を受信する。取得部231は、第1ロボット装置100から各種情報を受信する。取得部231は、記憶部22から各種情報を取得する。取得部231は、認識部232や、推定部233や、判定部234から情報を取得する。取得部231は、取得した情報を記憶部22に格納する。 The acquisition unit 231 acquires various information. The acquisition unit 231 acquires various information from an external information processing device. The acquisition unit 231 receives various information from an external information processing device. The acquisition unit 231 receives various information from the first robot device 100. The acquisition unit 231 acquires various information from the storage unit 22. The acquisition unit 231 acquires information from the recognition unit 232, the estimation unit 233, and the determination unit 234. The acquisition unit 231 stores the acquired information in the storage unit 22.
 取得部231は、センサ部24により検知されたセンサ情報を取得する。取得部231は、画像センサ241によって検知されるセンサ情報(画像情報)を取得する。取得部231は、画像センサ241により撮像された画像情報(画像)を取得する。 The acquisition unit 231 acquires the sensor information detected by the sensor unit 24. The acquisition unit 231 acquires the sensor information (image information) detected by the image sensor 241. The acquisition unit 231 acquires the image information (image) captured by the image sensor 241.
 取得部231は、第1ロボット装置100から行動計画を受信する。 The acquisition unit 231 receives the action plan from the first robot device 100.
 認識部232は、各種情報を認識する。認識部232は、各種情報を解析する。認識部232は、画像情報を解析する。認識部232は、外部の情報処理装置からの情報や記憶部22に記憶された情報に基づいて、画像情報から各種情報を解析する。認識部232は、画像情報から各種情報を特定する。認識部232は、画像情報から各種情報を抽出する。認識部232は、解析結果に基づく認識を行う。認識部232は、解析結果に基づいて、種々の情報を認識する。 The recognition unit 232 recognizes various types of information. The recognition unit 232 analyzes various information. The recognition unit 232 analyzes the image information. The recognition unit 232 analyzes various information from the image information based on the information from the external information processing device and the information stored in the storage unit 22. The recognition unit 232 identifies various types of information from the image information. The recognition unit 232 extracts various information from the image information. The recognition unit 232 performs recognition based on the analysis result. The recognition unit 232 recognizes various information based on the analysis result.
 認識部232は、画像に関する解析処理を行う。認識部232は、画像処理に関する各種処理を行う。認識部232は、取得部231により取得された画像情報(画像)に対して処理を行う。認識部232は、第2ロボット装置200が撮像した画像情報(画像)に対して処理を行う。認識部232は、画像センサ241により撮像された画像情報(画像)に対して処理を行う。認識部232は、画像処理に関する技術を適宜用いて、画像に対する処理を行う。認識部232は、オブジェクトを認識する。認識部232は、一般物体認識等の物体認識に関する種々の技術を適宜用いて、画像センサ241が検知した画像中に含まれる各物体を認識する。認識部232は、画像中の人を認識する。認識部232は、画像中の人の顔を認識する。認識部232は、顔認識機能により、監視対象の顔を認識する。認識部232は、人認識機能により、監視対象を人として認識する。 The recognition unit 232 performs analysis processing related to the image. The recognition unit 232 performs various processes related to image processing. The recognition unit 232 processes the image information (image) acquired by the acquisition unit 231. The recognition unit 232 processes the image information (image) captured by the second robot device 200. The recognition unit 232 processes the image information (image) captured by the image sensor 241. The recognition unit 232 performs processing on the image by appropriately using a technique related to image processing. The recognition unit 232 recognizes the object. The recognition unit 232 recognizes each object included in the image detected by the image sensor 241 by appropriately using various techniques related to object recognition such as general object recognition. The recognition unit 232 recognizes the person in the image. The recognition unit 232 recognizes a person's face in the image. The recognition unit 232 recognizes the face to be monitored by the face recognition function. The recognition unit 232 recognizes the monitored object as a person by the person recognition function.
 なお、認識部232が行う処理(認識処理)は、第1ロボット装置100の認識部132が行う処理よりも簡易な処理であってもよい。また、第2ロボット装置200が認識処理を行わず、第1ロボット装置100からの指示により制御される場合、第2ロボット装置200は、認識部232を有しなくてもよい。 The process (recognition process) performed by the recognition unit 232 may be a simpler process than the process performed by the recognition unit 132 of the first robot device 100. Further, when the second robot device 200 does not perform the recognition process and is controlled by the instruction from the first robot device 100, the second robot device 200 does not have to have the recognition unit 232.
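 A minimal sketch of the recognition flow of the recognition unit 232 is given below, with the face/person and object detectors reduced to placeholder functions; the detector outputs and the RecognitionUnit232 class name are assumptions for illustration, and in practice any face, person, or object recognition technique could back them.

from typing import Any, Dict, List

def detect_objects(image: Any) -> List[Dict]:
    """Placeholder for general object recognition (any detector could back this)."""
    return [{"label": "toy", "box": (10, 10, 40, 40)}]

def detect_faces(image: Any) -> List[Dict]:
    """Placeholder for face/person recognition of the monitoring target."""
    return [{"label": "face", "box": (60, 20, 30, 30)}]

class RecognitionUnit232:
    """Stand-in for the recognition unit 232: runs both detectors on a captured image."""
    def analyze(self, image: Any) -> Dict[str, List[Dict]]:
        return {
            "objects": detect_objects(image),
            "faces": detect_faces(image),
        }

if __name__ == "__main__":
    result = RecognitionUnit232().analyze("camera frame")
    # A non-empty "faces" list corresponds to recognizing the monitoring target as a person.
    print(result)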
 推定部233は、各種情報を推定する。推定部233は、外部の情報処理装置から取得された情報に基づいて、各種情報を推定する。推定部233は、記憶部22に記憶された情報に基づいて、各種情報を推定する。推定部233は、認識部232による認識処理の結果に基づいて、各種情報を推定する。 The estimation unit 233 estimates various types of information. The estimation unit 233 estimates various types of information based on the information acquired from the external information processing device. The estimation unit 233 estimates various types of information based on the information stored in the storage unit 22. The estimation unit 233 estimates various information based on the result of the recognition process by the recognition unit 232.
 推定部233は、各種情報を予測する。推定部233は、外部の情報処理装置から取得された情報に基づいて、各種情報を予測する。推定部233は、記憶部22に記憶された情報に基づいて、各種情報を予測する。推定部233は、認識部232による認識処理の結果に基づいて、各種情報を予測する。 The estimation unit 233 predicts various types of information. The estimation unit 233 predicts various types of information based on the information acquired from the external information processing device. The estimation unit 233 predicts various types of information based on the information stored in the storage unit 22. The estimation unit 233 predicts various information based on the result of the recognition process by the recognition unit 232.
 推定部233は、取得部231により取得された情報に基づいて、各種推定を行う。推定部233は、センサ部24により検知された各種のセンサ情報を用いて、各種推定を行う。推定部233は、画像センサ241によって検知されるセンサ情報を用いて、各種推定を行う。推定部233は、第2ロボット装置200が検知したセンサ情報を用いて、各種推定を行う。推定部233は、取得部231により取得された情報に基づいて、各種予測を行う。推定部233は、センサ部24により検知された各種のセンサ情報を用いて、各種予測を行う。推定部233は、画像センサ241によって検知されるセンサ情報を用いて、各種予測を行う。推定部233は、第2ロボット装置200が検知したセンサ情報を用いて、各種予測を行う。 The estimation unit 233 performs various estimations based on the information acquired by the acquisition unit 231. The estimation unit 233 performs various estimations using various sensor information detected by the sensor unit 24. The estimation unit 233 performs various estimations using the sensor information detected by the image sensor 241. The estimation unit 233 performs various estimations using the sensor information detected by the second robot device 200. The estimation unit 233 makes various predictions based on the information acquired by the acquisition unit 231. The estimation unit 233 makes various predictions using various sensor information detected by the sensor unit 24. The estimation unit 233 makes various predictions using the sensor information detected by the image sensor 241. The estimation unit 233 makes various predictions using the sensor information detected by the second robot device 200.
 推定部233は、取得部231により取得された画像情報に基づいて、推定処理を行う。推定部233は、第2ロボット装置200から受信された画像情報に基づいて、推定処理を行う。 The estimation unit 233 performs estimation processing based on the image information acquired by the acquisition unit 231. The estimation unit 233 performs estimation processing based on the image information received from the second robot device 200.
 なお、推定部233が行う処理(推定処理)は、第1ロボット装置100の推定部134が行う処理よりも簡易な処理であってもよい。また、第2ロボット装置200が推定処理を行わず、第1ロボット装置100からの指示により制御される場合、第2ロボット装置200は、推定部233を有しなくてもよい。 The process (estimation process) performed by the estimation unit 233 may be a simpler process than the process performed by the estimation unit 134 of the first robot device 100. Further, when the second robot device 200 does not perform the estimation process and is controlled by the instruction from the first robot device 100, the second robot device 200 does not have to have the estimation unit 233.
 判定部234は、各種情報を判定する。判定部234は、各種情報を決定する。判定部234は、各種情報を特定する。判定部234は、外部の情報処理装置から取得された情報に基づいて、各種情報を判定する。判定部234は、記憶部22に記憶された情報に基づいて、各種情報を判定する。 The determination unit 234 determines various information. The determination unit 234 determines various information. The determination unit 234 specifies various types of information. The determination unit 234 determines various types of information based on the information acquired from the external information processing device. The determination unit 234 determines various types of information based on the information stored in the storage unit 22.
 判定部234は、取得部231により取得された情報に基づいて、各種判定を行う。判定部234は、センサ部24により検知された各種のセンサ情報を用いて、各種判定を行う。判定部234は、画像センサ241によって検知されるセンサ情報を用いて、各種判定を行う。判定部234は、認識部232による認識処理の結果に基づいて、各種情報を判定する。判定部234は、推定部233による推定処理の結果に基づいて、各種情報を判定する。判定部234は、推定部233による予測処理の結果に基づいて、各種情報を判定する。 The determination unit 234 makes various determinations based on the information acquired by the acquisition unit 231. The determination unit 234 makes various determinations using various sensor information detected by the sensor unit 24. The determination unit 234 makes various determinations using the sensor information detected by the image sensor 241. The determination unit 234 determines various information based on the result of the recognition process by the recognition unit 232. The determination unit 234 determines various information based on the result of the estimation process by the estimation unit 233. The determination unit 234 determines various information based on the result of the prediction process by the estimation unit 233.
 判定部234は、取得部231により第2ロボット装置200から受信した監視対象の画像に基づいて、監視対象の行動に起因する危険を判定する。判定部234は、子どもまたはペットである監視対象の画像に基づいて、監視対象の行動に起因する危険を判定する。判定部234は、屋内の居住環境に位置する監視対象の画像に基づいて、監視対象の行動に起因する危険を判定する。 The determination unit 234 determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target received from the second robot device 200 by the acquisition unit 231. The determination unit 234 determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target that is a child or a pet. The determination unit 234 determines the danger caused by the behavior of the monitoring target based on the image of the monitoring target located in the indoor living environment.
 判定部234は、監視対象の行動に起因して監視対象に及ぶ危険を判定する。判定部234は、監視対象の行動に起因して監視対象以外に及ぶ危険を判定する。判定部234は、監視対象の行動に起因して監視対象以外の物体に及ぶ危険を判定する。判定部234は、監視対象の物体に対する行動に起因する危険を判定する。判定部234は、監視対象の物体への接触に起因する危険を判定する。判定部234は、監視対象の物体の把持に起因する危険を判定する。 The determination unit 234 determines the danger that the behavior of the monitoring target poses to the monitoring target itself. The determination unit 234 determines the danger that the behavior of the monitoring target poses to things other than the monitoring target. The determination unit 234 determines the danger that the behavior of the monitoring target poses to objects other than the monitoring target. The determination unit 234 determines the danger caused by the monitoring target's action on an object. The determination unit 234 determines the danger caused by the monitoring target's contact with an object. The determination unit 234 determines the danger caused by the monitoring target's grasping of an object.
 判定部234は、監視対象の位置の移動に起因する危険を判定する。判定部234は、危険の発生が予測されるエリアへの監視対象の侵入に起因する危険を判定する。判定部234は、複数の第2ロボット装置200の各々から受信した監視対象の画像に基づいて、監視対象の行動に起因する危険を判定する。 The determination unit 234 determines the danger caused by the movement of the position to be monitored. The determination unit 234 determines the danger caused by the intrusion of the monitored object into the area where the danger is predicted to occur. The determination unit 234 determines the danger caused by the behavior of the monitoring target based on the images of the monitoring target received from each of the plurality of second robot devices 200.
 なお、判定部234が行う処理(判定処理)は、第1ロボット装置100の判定部135が行う処理よりも簡易な処理であってもよい。また、第2ロボット装置200が判定処理を行わず、第1ロボット装置100からの指示により制御される場合、第2ロボット装置200は、判定部234を有しなくてもよい。 The process (determination process) performed by the determination unit 234 may be a simpler process than the process performed by the determination unit 135 of the first robot device 100. Further, when the second robot device 200 does not perform the determination process and is controlled by the instruction from the first robot device 100, the second robot device 200 does not have to have the determination unit 234.
 送信部235は、外部の情報処理装置へ各種情報を送信する。送信部235は、外部の情報処理装置へ各種情報を送信する。例えば、送信部235は、第1ロボット装置100へ各種情報を送信する。送信部235は、記憶部22に記憶された情報を提供する。送信部235は、記憶部22に記憶された情報を送信する。 The transmission unit 235 transmits various information to an external information processing device. The transmission unit 235 transmits various information to an external information processing device. For example, the transmission unit 235 transmits various information to the first robot device 100. The transmission unit 235 provides the information stored in the storage unit 22. The transmission unit 235 transmits the information stored in the storage unit 22.
 送信部235は、センサ部24により検知されたセンサ情報を送信する。送信部235は、画像センサ241によって検知されるセンサ情報(画像情報)を送信する。送信部235は、画像センサ241により撮像された画像情報(画像)を送信する。送信部235は、第1ロボット装置100に画像を送信する。 The transmission unit 235 transmits the sensor information detected by the sensor unit 24. The transmission unit 235 transmits the sensor information (image information) detected by the image sensor 241. The transmission unit 235 transmits the image information (image) captured by the image sensor 241. The transmission unit 235 transmits an image to the first robot device 100.
 送信部235は、画像センサ241により撮像した監視対象の画像を第1ロボット装置100に送信する。送信部235は、画像センサ241により撮像した子どもまたはペットである監視対象の画像を第1ロボット装置100に送信する。送信部235は、画像センサ241により撮像した屋内の居住環境に位置する監視対象の画像を第1ロボット装置100に送信する。送信部235は、アラートが発行されたことを示す情報を第1ロボット装置100に転送する。 The transmission unit 235 transmits the image of the monitoring target captured by the image sensor 241 to the first robot device 100. The transmission unit 235 transmits an image of a monitoring target, which is a child or a pet, captured by the image sensor 241 to the first robot device 100. The transmission unit 235 transmits the image of the monitoring target located in the indoor living environment captured by the image sensor 241 to the first robot device 100. The transmission unit 235 transfers the information indicating that the alert has been issued to the first robot device 100.
 実行部236は、各種処理を実行する。実行部236は、外部の情報処理装置からの情報に基づいて、各種処理を実行する。実行部236は、記憶部22に記憶された情報に基づいて、各種処理を実行する。実行部236は、マップ情報記憶部221やオブジェクト情報記憶部222や危険判定用情報記憶部223に記憶された情報に基づいて、各種処理を実行する。実行部236は、取得部231により取得された情報に基づいて、各種処理を実行する。実行部236は、操作部26の操作を制御する操作制御部として機能する。 Execution unit 236 executes various processes. The execution unit 236 executes various processes based on information from an external information processing device. The execution unit 236 executes various processes based on the information stored in the storage unit 22. The execution unit 236 executes various processes based on the information stored in the map information storage unit 221 and the object information storage unit 222 and the danger determination information storage unit 223. The execution unit 236 executes various processes based on the information acquired by the acquisition unit 231. The execution unit 236 functions as an operation control unit that controls the operation of the operation unit 26.
 実行部236は、推定部233による推定結果に基づいて、各種処理を実行する。実行部236は、推定部233による予測結果に基づいて、各種処理を実行する。実行部236は、判定部234による判定結果に基づいて、各種処理を実行する。実行部236は、第1ロボット装置100から取得した行動計画に基づいて、各種処理を実行する。 The execution unit 236 executes various processes based on the estimation result by the estimation unit 233. The execution unit 236 executes various processes based on the prediction result by the estimation unit 233. The execution unit 236 executes various processes based on the determination result by the determination unit 234. The execution unit 236 executes various processes based on the action plan acquired from the first robot device 100.
 実行部236は、第1ロボット装置100から取得した行動計画の情報に基づいて、移動部25を制御して行動計画に対応する行動を実行する。実行部236は、行動計画の情報に基づく移動部25の制御により、行動計画に沿って第2ロボット装置200の移動処理を実行する。 The execution unit 236 controls the moving unit 25 based on the action plan information acquired from the first robot device 100 to execute the action corresponding to the action plan. The execution unit 236 executes the movement process of the second robot device 200 according to the action plan by controlling the moving unit 25 based on the action plan information.
 実行部236は、第1ロボット装置100から取得した行動計画の情報に基づいて、操作部26を制御して行動計画に対応する行動を実行する。 The execution unit 236 controls the operation unit 26 to execute the action corresponding to the action plan based on the action plan information acquired from the first robot device 100.
 実行部236は、第1ロボット装置100からの指示に応じて、危険の発生を回避するための処理を実行する。実行部236は、第1ロボット装置100からの指示に応じて、監視対象の注意を向けさせるための行動を実行する。実行部236は、第2ロボット装置200に、第1ロボット装置100からの指示に応じて、出力部27により音声出力する。実行部236は、第1ロボット装置100からの指示に応じて、監視対象の視野内に位置するように移動を実行する。 The execution unit 236 executes a process for avoiding the occurrence of danger in response to an instruction from the first robot device 100. The execution unit 236 executes an action for directing the attention of the monitored object in response to an instruction from the first robot device 100. The execution unit 236 outputs voice to the second robot device 200 by the output unit 27 in response to an instruction from the first robot device 100. The execution unit 236 executes the movement so as to be located in the field of view to be monitored in response to the instruction from the first robot device 100.
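 The following Python sketch illustrates, under assumed names (ExecutionUnit236, OutputUnit27, MovingUnit25) and an assumed instruction format, how the second robot device 200 might act on an instruction from the first robot device 100 by outputting a sound and moving into the monitoring target's field of view; the disclosure does not specify a concrete message format.

class OutputUnit27:
    """Stand-in for the output unit 27 (speaker)."""
    def play_sound(self, phrase: str) -> None:
        print(f"output unit: playing '{phrase}'")

class MovingUnit25:
    """Stand-in for the moving unit 25."""
    def move_to(self, x: float, y: float) -> None:
        print(f"moving unit: moving to ({x}, {y})")

class ExecutionUnit236:
    """Stand-in for the execution unit 236 acting on instructions from the first robot."""
    def __init__(self) -> None:
        self.output = OutputUnit27()
        self.mover = MovingUnit25()

    def handle_instruction(self, instruction: dict) -> None:
        # The instruction format is hypothetical; the disclosure only states that the first
        # robot instructs voice output and positioning within the target's field of view.
        if instruction.get("type") == "attract_attention":
            self.output.play_sound(instruction.get("phrase", "beep"))
            fx, fy = instruction.get("position_in_view", (0.0, 0.0))
            self.mover.move_to(fx, fy)

if __name__ == "__main__":
    ExecutionUnit236().handle_instruction(
        {"type": "attract_attention", "phrase": "over here!", "position_in_view": (1.0, 0.5)}
    )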
 センサ部24は、所定の情報を検知する。センサ部24は、画像を撮像する撮像手段としての画像センサ241を有する。 The sensor unit 24 detects predetermined information. The sensor unit 24 has an image sensor 241 as an image pickup means for capturing an image.
 画像センサ241は、画像情報を検知し、第2ロボット装置200の視覚として機能する。例えば、画像センサ241は、第2ロボット装置200の前方部分に設けられる。画像センサ241は、画像情報を撮像する。図1の例では、画像センサ241は、監視対象TGを含む画像を検知(撮像)する。 The image sensor 241 detects image information and functions as the visual sense of the second robot device 200. For example, the image sensor 241 is provided in the front portion of the second robot device 200. The image sensor 241 captures image information. In the example of FIG. 1, the image sensor 241 detects (captures) an image including the monitoring target TG.
 また、センサ部24は、画像センサ241に限らず、各種センサを有してもよい。センサ部24は、近接センサを有してもよい。センサ部24は、LiDARやToFセンサやステレオカメラ等の測距センサを有してもよい。センサ部24は、GPSセンサ等の第2ロボット装置200の位置情報を検知するセンサ(位置センサ)を有してもよい。センサ部24は、力を検知し、第2ロボット装置200の触覚として機能する力覚センサを有してもよい。例えば、センサ部24は、操作部26の先端部(保持部)に設けられる力覚センサを有してもよい。センサ部24は、操作部26による物体への接触に関する検知を行う力覚センサを有してもよい。なお、センサ部24は、上記に限らず、種々のセンサを有してもよい。センサ部24は、加速度センサ、ジャイロセンサ等の種々のセンサを有してもよい。また、センサ部24における上記の各種情報を検知するセンサは共通のセンサであってもよいし、各々異なるセンサにより実現されてもよい。 Further, the sensor unit 24 is not limited to the image sensor 241 and may have various sensors. The sensor unit 24 may have a proximity sensor. The sensor unit 24 may have a distance measuring sensor such as a LiDAR, a ToF sensor, or a stereo camera. The sensor unit 24 may have a sensor (position sensor) that detects the position information of the second robot device 200 such as a GPS sensor. The sensor unit 24 may have a force sensor that detects a force and functions as a tactile sense of the second robot device 200. For example, the sensor unit 24 may have a force sensor provided at the tip (holding unit) of the operation unit 26. The sensor unit 24 may have a force sensor that detects contact with an object by the operation unit 26. The sensor unit 24 is not limited to the above, and may have various sensors. The sensor unit 24 may have various sensors such as an acceleration sensor and a gyro sensor. Further, the sensors that detect the above-mentioned various information in the sensor unit 24 may be common sensors, or may be realized by different sensors.
 移動部25は、第2ロボット装置200における物理的構成を駆動する機能を有する。移動部25は、第2ロボット装置200の位置の移動を行うための機能を有する。移動部25は、例えばアクチュエータである。なお、移動部25は、第2ロボット装置200が所望の動作を実現可能であれば、どのような構成であってもよい。移動部25は、第2ロボット装置200の位置の移動等を実現可能であれば、どのような構成であってもよい。第2ロボット装置200がキャタピラやタイヤ等の移動機構を有する場合、移動部25は、キャタピラやタイヤ等を駆動する。また、第2ロボット装置200が浮遊する場合、移動部25は、第2ロボット装置200が浮遊した状態での移動を実現する構成(例えば回転翼機等)であってもよい。例えば、移動部25は、実行部236による指示に応じて、第2ロボット装置200の移動機構を駆動することにより、第2ロボット装置200を移動させ、第2ロボット装置200の位置を変更する。 The moving unit 25 has a function of driving the physical configuration of the second robot device 200. The moving unit 25 has a function for moving the position of the second robot device 200. The moving unit 25 is, for example, an actuator. The moving unit 25 may have any configuration as long as the second robot device 200 can realize a desired operation. The moving unit 25 may have any configuration as long as the position of the second robot device 200 can be moved. When the second robot device 200 has a moving mechanism such as caterpillars and tires, the moving unit 25 drives the caterpillars and tires. Further, when the second robot device 200 floats, the moving unit 25 may have a configuration (for example, a rotary wing machine or the like) that realizes the movement of the second robot device 200 in a floating state. For example, the moving unit 25 moves the second robot device 200 and changes the position of the second robot device 200 by driving the moving mechanism of the second robot device 200 in response to an instruction from the execution unit 236.
 出力部27は、各種の出力を行う。出力部27は、各種情報を出力する。出力部27は、音声を出力する機能を有する。例えば、出力部27は、音声を出力するスピーカーを有する。出力部27は、光等、種々の態様による出力を行う機能を有してもよい。また、出力部27は、監視対象の注意を引くことが可能な出力機能を有する。なお、第2ロボット装置200は、監視対象の注意を引くための出力を行わない場合、第2ロボット装置200は、出力部27を有しなくてもよい。 The output unit 27 performs various outputs. The output unit 27 outputs various information. The output unit 27 has a function of outputting audio. For example, the output unit 27 has a speaker that outputs sound. The output unit 27 may have a function of outputting in various modes such as light. In addition, the output unit 27 has an output function capable of attracting the attention of the monitored object. If the second robot device 200 does not output to attract the attention of the monitored object, the second robot device 200 does not have to have the output unit 27.
[1-5.実施形態に係る情報処理の手順]
 次に、図5を用いて、実施形態に係る情報処理の手順について説明する。図5は、実施形態に係る情報処理の手順を示すフローチャートである。
[1-5. Information processing procedure according to the embodiment]
Next, the procedure of information processing according to the embodiment will be described with reference to FIG. FIG. 5 is a flowchart showing an information processing procedure according to the embodiment.
 図5に示すように、情報処理システム1は、第2ロボットが画像センサにより撮像した監視対象の画像を第1ロボットに送信する(ステップS101)。例えば、第2ロボット装置200は、画像センサ241により撮像した監視対象の画像を第1ロボット装置100に送信する。 As shown in FIG. 5, the information processing system 1 transmits the image of the monitoring target captured by the second robot by the image sensor to the first robot (step S101). For example, the second robot device 200 transmits the image of the monitoring target captured by the image sensor 241 to the first robot device 100.
 情報処理システム1は、第1ロボットが第2ロボットから受信した監視対象の画像に基づいて、監視対象の行動に起因する危険を判定する(ステップS102)。例えば、第1ロボット装置100は、第2ロボット装置200から受信した監視対象の画像に基づいて、監視対象の行動に起因する危険を判定する。 The information processing system 1 determines the danger caused by the behavior of the monitored target based on the image of the monitored target received by the first robot from the second robot (step S102). For example, the first robot device 100 determines the danger caused by the behavior of the monitored target based on the image of the monitored target received from the second robot device 200.
 そして、情報処理システム1は、第1ロボットが判定結果に基づいて、危険の発生を回避するための処理を実行する(ステップS103)。例えば、第1ロボット装置100は、判定結果に基づいて、危険の発生を回避するための処理を実行する。 Then, in the information processing system 1, the first robot executes a process for avoiding the occurrence of danger based on the determination result (step S103). For example, the first robot device 100 executes a process for avoiding the occurrence of danger based on the determination result.
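 The three steps S101 to S103 can be summarized in the following Python sketch; the stub functions and the structure of the exchanged data are assumptions used only to make the flow concrete.

from typing import Any, Optional

def capture_image_on_second_robot() -> Any:
    """Step S101 (hypothetical stub): the second robot captures the monitoring target."""
    return {"frame": "image of monitoring target", "objects_near_face": ["small toy"]}

def judge_danger_on_first_robot(image: Any) -> Optional[str]:
    """Step S102 (hypothetical stub): the first robot judges danger from the received image."""
    if image.get("objects_near_face"):
        return "possible swallowing hazard"
    return None

def avoid_danger_on_first_robot(danger: str) -> None:
    """Step S103 (hypothetical stub): execute processing to avoid the predicted danger."""
    print(f"first robot: avoiding danger -> {danger}")

if __name__ == "__main__":
    image = capture_image_on_second_robot()       # S101: second robot -> first robot
    danger = judge_danger_on_first_robot(image)   # S102: danger determination
    if danger is not None:
        avoid_danger_on_first_robot(danger)       # S103: avoidance processing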
[1-6.情報処理システムの構成の概念図]
 ここで、図6を用いて、情報処理システム1の第1ロボット装置100や第2ロボット装置200における各機能やハードウェア構成やデータを概念的に示す。図6は、情報処理システムの構成の概念図の一例を示す図である。
[1-6. Conceptual diagram of the configuration of the information processing system]
Here, with reference to FIG. 6, each function, hardware configuration, and data in the first robot device 100 and the second robot device 200 of the information processing system 1 are conceptually shown. FIG. 6 is a diagram showing an example of a conceptual diagram of the configuration of the information processing system.
 図6に示すように大ロボットである第1ロボット装置100は、撮影センサやオブジェクト認識機能や自己位置推定機能や危険度マップ推定機能や外部通信機能や子供退避機能や移動体を有する。例えば、撮影センサは、画像センサ141に対応する。オブジェクト認識機能は、認識部132に対応する。自己位置推定機能は、推定部134に対応する。危険度マップ推定機能は、生成部133や推定部134に対応する。外部通信機能は、通信部11や取得部131や実行部137に対応する。子供退避機能は、実行部137や操作部16に対応する。移動体は、移動部15に対応する。 As shown in FIG. 6, the first robot device 100, which is a large robot, has a photographing sensor, an object recognition function, a self-position estimation function, a risk map estimation function, an external communication function, a child evacuation function, and a moving body. For example, the photographing sensor corresponds to the image sensor 141. The object recognition function corresponds to the recognition unit 132. The self-position estimation function corresponds to the estimation unit 134. The risk map estimation function corresponds to the generation unit 133 and the estimation unit 134. The external communication function corresponds to the communication unit 11, the acquisition unit 131, and the execution unit 137. The child evacuation function corresponds to the execution unit 137 and the operation unit 16. The moving body corresponds to the moving unit 15.
 図6に示すように小ロボットである第2ロボット装置200は、撮影センサやオブジェクト認識機能や人体認識機能(追跡機能)や外部通信機能や移動体を有する。例えば、撮影センサは、画像センサ241に対応する。オブジェクト認識機能は、認識部232に対応する。人体認識機能(追跡機能)は、認識部232や実行部236に対応する。外部通信機能は、通信部21や取得部231や送信部235に対応する。移動体は、移動部25に対応する。 As shown in FIG. 6, the second robot device 200, which is a small robot, has a photographing sensor, an object recognition function, a human body recognition function (tracking function), an external communication function, and a mobile body. For example, the photographing sensor corresponds to the image sensor 241. The object recognition function corresponds to the recognition unit 232. The human body recognition function (tracking function) corresponds to the recognition unit 232 and the execution unit 236. The external communication function corresponds to the communication unit 21, the acquisition unit 231 and the transmission unit 235. The moving body corresponds to the moving unit 25.
 例えば、情報処理システム1は、大規模な処理量のものを大ロボット(第1ロボット装置100)にオフロードして、小ロボット(第2ロボット装置200)は最低限の機能だけをもつようにする。これにより、情報処理システム1は、小ロボットのコストを下げること等により、省コスト化できる。 For example, the information processing system 1 offloads processing with a large processing load to the large robot (first robot device 100) so that the small robot (second robot device 200) has only the minimum functions. As a result, the information processing system 1 can save costs, for example by reducing the cost of the small robot.
[1-7.情報処理システムの処理例]
 次に、図7~図12を用いて、実施形態に係る各種処理の流れについて説明する。
[1-7. Information processing system processing example]
Next, the flow of various processes according to the embodiment will be described with reference to FIGS. 7 to 12.
[1-7-1.探索処理の例]
 まず、図7を用いて、探索処理の例を示す。図7は、情報処理システムにおける探索処理の一例を示す図である。具体的には、図7は、情報処理システム1における探索処理の例を示す。なお、図7に示すステップ番号は、処理の説明を行うためのもの(符号)であり処理の順序を示すものではない。
[1-7-1. Example of search processing]
First, an example of the search process is shown with reference to FIG. 7. FIG. 7 is a diagram showing an example of search processing in an information processing system. Specifically, FIG. 7 shows an example of search processing in the information processing system 1. The step numbers shown in FIG. 7 are for explaining the processing (reference numerals) and do not indicate the order of the processing.
 図7に示すように、第2ロボット装置200は、カメラ画像を撮影する(ステップS201)。例えば、第2ロボット装置200は、画像センサ241により画像を撮影する。 As shown in FIG. 7, the second robot device 200 captures a camera image (step S201). For example, the second robot device 200 captures an image by the image sensor 241.
 第2ロボット装置200は、顔や人を認識する(ステップS202)。例えば、第2ロボット装置200は、画像に含まれる顔や人を認識する。 The second robot device 200 recognizes a face or a person (step S202). For example, the second robot device 200 recognizes a face or a person included in an image.
 第2ロボット装置200は、オブジェクトを認識する(ステップS203)。例えば、第2ロボット装置200は、画像に含まれるオブジェクトを認識する。 The second robot device 200 recognizes the object (step S203). For example, the second robot device 200 recognizes an object included in the image.
 第2ロボット装置200は、判定を行う(ステップS204)。例えば、第2ロボット装置200は、画像に人が含まれるかを判定する。 The second robot device 200 makes a determination (step S204). For example, the second robot device 200 determines whether or not a person is included in the image.
 第2ロボット装置200は、画像に人が含まれると判定した場合(ステップS204:Yes)、顔の位置を推定する(ステップS205)。例えば、第2ロボット装置200は、画像に含まれる人の顔の位置を推定する。そして、第2ロボット装置200は、ステップS206の処理を行う。 When the second robot device 200 determines that the image includes a person (step S204: Yes), the second robot device 200 estimates the position of the face (step S205). For example, the second robot device 200 estimates the position of a person's face included in the image. Then, the second robot device 200 performs the process of step S206.
 第2ロボット装置200は、画像に人が含まれないと判定した場合(ステップS204:No)、ステップS205の処理を行うことなく、ステップS206の処理を行う。 When it is determined that the image does not include a person (step S204: No), the second robot device 200 performs the process of step S206 without performing the process of step S205.
 第2ロボット装置200は、自己位置制御を行う(ステップS206)。例えば、第2ロボット装置200は、推定される自己位置に基づく制御を行う。 The second robot device 200 performs self-position control (step S206). For example, the second robot device 200 performs control based on the estimated self-position.
 第2ロボット装置200は、ステップS201の処理を行ったり、情報を転送したりする(ステップS207)。第2ロボット装置200は、カメラ画像の撮影を繰り返したり、第1ロボット装置100に情報を転送したりする。例えば、第2ロボット装置200は、第1ロボット装置100にカメラ画像を転送する。 The second robot device 200 performs the process of step S201 and transfers information (step S207). The second robot device 200 repeatedly takes a camera image and transfers information to the first robot device 100. For example, the second robot device 200 transfers a camera image to the first robot device 100.
 第1ロボット装置100は、情報を受信する(ステップS208)。例えば、第1ロボット装置100は、第2ロボット装置200からカメラ画像を受信する。 The first robot device 100 receives the information (step S208). For example, the first robot device 100 receives a camera image from the second robot device 200.
 第1ロボット装置100は、小ロボットの自己位置を推定する(ステップS209)。例えば、第1ロボット装置100は、第2ロボット装置200から受信したカメラ画像を基に、小ロボットである第2ロボット装置200の位置を推定する。 The first robot device 100 estimates the self-position of the small robot (step S209). For example, the first robot device 100 estimates the position of the second robot device 200, which is a small robot, based on the camera image received from the second robot device 200.
 第1ロボット装置100は、ローカライズが成功したか否かを判定する(ステップS210)。例えば、第1ロボット装置100は、第2ロボット装置200から受信したカメラ画像を基に第2ロボット装置200の位置が推定できるか否かを判定する。第1ロボット装置100は、ローカライズが成功しなかった場合(ステップS210:No)、ステップS208で受信した情報でのローカライズの処理を終了する。例えば、第1ロボット装置100は、第2ロボット装置200から受信したカメラ画像を基に第2ロボット装置200の位置が推定できなかった場合、そのカメラ画像で第2ロボット装置200の位置を推定する処理を終了する。 The first robot device 100 determines whether or not the localization is successful (step S210). For example, the first robot device 100 determines whether or not the position of the second robot device 200 can be estimated based on the camera image received from the second robot device 200. If the localization is not successful (step S210: No), the first robot device 100 ends the localization process with the information received in step S208. For example, when the position of the second robot device 200 cannot be estimated based on the camera image received from the second robot device 200, the first robot device 100 estimates the position of the second robot device 200 from the camera image. End the process.
 第1ロボット装置100は、ローカライズを成功した場合(ステップS210:Yes)、危険度マップを更新する(ステップS211)。例えば、第1ロボット装置100は、第2ロボット装置200から受信したカメラ画像を基に危険度マップを更新する。 When the localization is successful (step S210: Yes), the first robot device 100 updates the risk map (step S211). For example, the first robot device 100 updates the risk map based on the camera image received from the second robot device 200.
 第1ロボット装置100は、大ロボットの位置制御を行う(ステップS212)。例えば、第1ロボット装置100は、推定される自己位置に基づく制御を行う。 The first robot device 100 controls the position of the large robot (step S212). For example, the first robot device 100 performs control based on the estimated self-position.
 第1ロボット装置100は、カメラ画像を撮影する(ステップS213)。例えば、第1ロボット装置100は、画像センサ141により画像を撮影する。 The first robot device 100 captures a camera image (step S213). For example, the first robot device 100 captures an image by the image sensor 141.
 第1ロボット装置100は、大ロボットの自己位置を推定する(ステップS214)。例えば、第1ロボット装置100は、撮影したカメラ画像を基に、大ロボットである第1ロボット装置100の位置を推定する。 The first robot device 100 estimates the self-position of the large robot (step S214). For example, the first robot device 100 estimates the position of the first robot device 100, which is a large robot, based on the captured camera image.
 第1ロボット装置100は、オブジェクトを認識する(ステップS215)。例えば、第1ロボット装置100は、画像に含まれるオブジェクトを認識する。 The first robot device 100 recognizes an object (step S215). For example, the first robot device 100 recognizes an object included in the image.
 第1ロボット装置100は、危険度マップを更新する(ステップS211)。例えば、第1ロボット装置100は、撮影したカメラ画像を基に、危険度マップを更新する。そして、第1ロボット装置100は、ステップS212の処理を行う。 The first robot device 100 updates the risk map (step S211). For example, the first robot device 100 updates the risk map based on the captured camera image. Then, the first robot device 100 performs the process of step S212.
 上記のように、情報処理システム1は、大ロボット(第1ロボット装置100)のオブジェクト認識に加えて、小ロボット(第2ロボット装置200)の顔、人認識を使って子供を発見したり、追跡したりすることができる。これにより、情報処理システム1は、子供を探す際の各ロボットの動きを制御する等により、適切に子供の探索を行うことができる。 As described above, the information processing system 1 can discover and track a child by using the face and person recognition of the small robot (second robot device 200) in addition to the object recognition of the large robot (first robot device 100). As a result, the information processing system 1 can appropriately search for the child, for example by controlling the movement of each robot during the search.
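 The combined search behaviour can be sketched as follows, with both robots' detections reduced to stub functions that randomly report whether the child was found; the function names and probabilities are purely illustrative assumptions.

import random
from typing import Optional, Tuple

def small_robot_scan() -> Optional[Tuple[float, float]]:
    """Hypothetical stub: the small robot looks for a face/person and returns its position."""
    return (2.0, 1.5) if random.random() < 0.5 else None

def large_robot_scan() -> Optional[Tuple[float, float]]:
    """Hypothetical stub: the large robot relies on object recognition of its own camera."""
    return (1.0, 0.5) if random.random() < 0.3 else None

def search_for_child(max_steps: int = 10) -> Optional[Tuple[float, float]]:
    # Combine both robots' observations; whichever finds the child first ends the search.
    for _ in range(max_steps):
        found = small_robot_scan() or large_robot_scan()
        if found is not None:
            return found
    return None

if __name__ == "__main__":
    print("child found at:", search_for_child())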
[1-7-2.探索処理の他の例]
 次に、図8を用いて、探索処理の他の例を示す。図8は、情報処理システムにおける探索処理の他の一例を示す図である。具体的には、図8は、情報処理システム1における探索処理の他の例を示す。なお、図8に示すステップ番号は、処理の説明を行うためのもの(符号)であり処理の順序を示すものではない。また、図7と同様の処理については適宜説明を省略する。
[1-7-2. Other examples of search processing]
Next, another example of the search process is shown with reference to FIG. FIG. 8 is a diagram showing another example of the search process in the information processing system. Specifically, FIG. 8 shows another example of the search process in the information processing system 1. The step numbers shown in FIG. 8 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIG. 7 will be omitted as appropriate.
 図8に示すように、第2ロボット装置200は、カメラ画像を撮影する(ステップS301)。第2ロボット装置200は、顔や人を認識する(ステップS302)。第2ロボット装置200は、オブジェクトを認識する(ステップS303)。例えば、第2ロボット装置200は、画像に含まれるオブジェクトを認識する。 As shown in FIG. 8, the second robot device 200 captures a camera image (step S301). The second robot device 200 recognizes a face or a person (step S302). The second robot device 200 recognizes the object (step S303). For example, the second robot device 200 recognizes an object included in the image.
 第2ロボット装置200は、画像に人が含まれると判定した場合(ステップS304:Yes)、顔の位置を推定する(ステップS305)。第2ロボット装置200は、画像に人が含まれないと判定した場合(ステップS304:No)、ステップS305の処理を行うことなく、ステップS306の処理を行う。 When the second robot device 200 determines that the image includes a person (step S304: Yes), the second robot device 200 estimates the position of the face (step S305). When it is determined that the image does not include a person (step S304: No), the second robot device 200 performs the process of step S306 without performing the process of step S305.
 第2ロボット装置200は、自己位置制御を行う(ステップS306)。第2ロボット装置200は、ステップS301の処理を行ったり、情報を転送したりする(ステップS307)。 The second robot device 200 performs self-position control (step S306). The second robot device 200 performs the process of step S301 and transfers information (step S307).
 第1ロボット装置100は、情報を受信する(ステップS308)。第1ロボット装置100は、小ロボットの自己位置を推定する(ステップS309)。 The first robot device 100 receives the information (step S308). The first robot device 100 estimates the self-position of the small robot (step S309).
 第1ロボット装置100は、ローカライズが成功しなかった場合(ステップS310:No)、ステップS308で受信した情報でのローカライズの処理を終了する。第1ロボット装置100は、ローカライズを成功した場合(ステップS310:Yes)、危険度マップを更新する(ステップS311)。 If the localization is not successful (step S310: No), the first robot device 100 ends the localization process with the information received in step S308. When the localization is successful (step S310: Yes), the first robot device 100 updates the risk map (step S311).
 第1ロボット装置100は、大ロボットの位置制御を行ったり(ステップS312)、ステップS316の処理を行ったりする。第1ロボット装置100は、カメラ画像を撮影する(ステップS313)。第1ロボット装置100は、大ロボットの自己位置を推定する(ステップS314)。第1ロボット装置100は、オブジェクトを認識する(ステップS315)。第1ロボット装置100は、危険度マップを更新する(ステップS311)。例えば、第1ロボット装置100は、撮影したカメラ画像を基に、危険度マップを更新する。そして、第1ロボット装置100は、ステップS312の処理を行ったり、ステップS316の処理を行ったりする。 The first robot device 100 controls the position of the large robot (step S312) and performs the process of step S316. The first robot device 100 captures a camera image (step S313). The first robot device 100 estimates the self-position of the large robot (step S314). The first robot device 100 recognizes the object (step S315). The first robot device 100 updates the risk map (step S311). For example, the first robot device 100 updates the risk map based on the captured camera image. Then, the first robot device 100 performs the process of step S312 and the process of step S316.
 第1ロボット装置100は、小ロボットの経路計画を行う(ステップS316)。例えば、第1ロボット装置100は、危険度マップを基に第2ロボット装置200の経路計画を生成する。 The first robot device 100 plans the route of the small robot (step S316). For example, the first robot device 100 generates a route plan for the second robot device 200 based on the risk map.
 第1ロボット装置100は、情報を転送する(ステップS317)。例えば、第1ロボット装置100は、生成した経路計画を第2ロボット装置200に転送する。 The first robot device 100 transfers information (step S317). For example, the first robot device 100 transfers the generated route plan to the second robot device 200.
 第2ロボット装置200は、情報を受信する(ステップS318)。例えば、第2ロボット装置200は、第1ロボット装置100から経路計画を受信する。 The second robot device 200 receives the information (step S318). For example, the second robot device 200 receives a route plan from the first robot device 100.
 第2ロボット装置200は、受信した経路計画を基に自己位置を制御する(ステップS306)。例えば、第2ロボット装置200は、第1ロボット装置100から受信した経路計画に基づく経路を移動する制御を行う。 The second robot device 200 controls its own position based on the received route plan (step S306). For example, the second robot device 200 controls to move a route based on the route plan received from the first robot device 100.
 上記のように、情報処理システム1は、小ロボット(第2ロボット装置200)が子供を見つけられない場合、小ロボットと大ロボット(第1ロボット装置100)の各々の動きを把握して、探索箇所を分担すること等ができる。これにより、情報処理システム1は、子供を探す際の各ロボットの動きを制御する等により、適切に子供の探索を行うことができる。 As described above, when the small robot (second robot device 200) cannot find the child, the information processing system 1 can grasp the movements of the small robot and the large robot (first robot device 100) and divide the search locations between them. As a result, the information processing system 1 can appropriately search for the child, for example by controlling the movement of each robot during the search.
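 One conceivable way to share the search locations between the two robots, assuming a grid of unexplored cells and an arbitrary even/odd split (the disclosure does not specify a concrete partitioning method), is sketched below; the resulting cell list for the small robot corresponds to the route plan transferred in steps S316 and S317.

from typing import List, Tuple

Cell = Tuple[int, int]

def plan_search_split(area: List[Cell]) -> Tuple[List[Cell], List[Cell]]:
    """Hypothetical stub: split the unexplored cells between the two robots.

    The even/odd split used here is an arbitrary illustration.
    """
    small_robot_cells = [c for i, c in enumerate(area) if i % 2 == 0]
    large_robot_cells = [c for i, c in enumerate(area) if i % 2 == 1]
    return small_robot_cells, large_robot_cells

if __name__ == "__main__":
    unexplored = [(x, y) for x in range(3) for y in range(2)]
    small, large = plan_search_split(unexplored)
    print("small robot route:", small)   # transferred to the second robot as its route plan
    print("large robot route:", large)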
[1-7-3.誤飲抑制処理の例]
 次に、図9を用いて、誤飲抑制処理の例を示す。図9は、情報処理システムにおける誤飲抑制処理の一例を示す図である。具体的には、図9は、情報処理システム1における誤飲抑制処理の例を示す。なお、図9に示すステップ番号は、処理の説明を行うためのもの(符号)であり処理の順序を示すものではない。また、図7や図8と同様の処理については適宜説明を省略する。
[1-7-3. Example of accidental ingestion suppression treatment]
Next, an example of accidental ingestion suppression treatment is shown with reference to FIG. FIG. 9 is a diagram showing an example of accidental ingestion suppression processing in an information processing system. Specifically, FIG. 9 shows an example of accidental ingestion suppression processing in the information processing system 1. The step numbers shown in FIG. 9 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIGS. 7 and 8 will be omitted as appropriate.
 図9に示すように、第2ロボット装置200は、カメラ画像を撮影する(ステップS401)。第2ロボット装置200は、顔や人を認識する(ステップS402)。第2ロボット装置200は、オブジェクトを認識する(ステップS403)。例えば、第2ロボット装置200は、画像に含まれるオブジェクトを認識する。 As shown in FIG. 9, the second robot device 200 captures a camera image (step S401). The second robot device 200 recognizes a face or a person (step S402). The second robot device 200 recognizes the object (step S403). For example, the second robot device 200 recognizes an object included in the image.
 第2ロボット装置200は、顔の位置を推定する(ステップS404)。例えば、第2ロボット装置200は、画像に含まれる人の顔の位置を推定する。第2ロボット装置200は、自己位置制御を行う(ステップS405)。 The second robot device 200 estimates the position of the face (step S404). For example, the second robot device 200 estimates the position of a person's face included in the image. The second robot device 200 performs self-position control (step S405).
 第2ロボット装置200は、危険判定を行う(ステップS406)。例えば、第2ロボット装置200は、画像に含まれる人が誤飲する可能性は有るかを判定する。例えば、第2ロボット装置200は、画像に含まれる人の顔の付近に他の物体が位置するかを判定する。 The second robot device 200 makes a danger determination (step S406). For example, the second robot device 200 determines whether there is a possibility that the person included in the image may accidentally swallow an object. For example, the second robot device 200 determines whether another object is located near the face of the person included in the image.
 第2ロボット装置200は、危険があると判定した場合(ステップS407:Yes)、アラートを発行する(ステップS408)。そして、第2ロボット装置200は、ステップS409の処理を行う。 When the second robot device 200 determines that there is a danger (step S407: Yes), the second robot device 200 issues an alert (step S408). Then, the second robot device 200 performs the process of step S409.
 第2ロボット装置200は、危険がないと判定した場合(ステップS407:No)、ステップS408の処理を行うことなく、ステップS409の処理を行う。 When the second robot device 200 determines that there is no danger (step S407: No), the second robot device 200 performs the process of step S409 without performing the process of step S408.
 第2ロボット装置200は、情報を転送する(ステップS409)。例えば、第2ロボット装置200は、第1ロボット装置100にカメラ画像を転送する。また、第2ロボット装置200は、アラートを発行した場合、アラートを含む情報を、第1ロボット装置100に転送する。第2ロボット装置200は、アラートが発行されたことを示す情報を第1ロボット装置100に転送する。 The second robot device 200 transfers information (step S409). For example, the second robot device 200 transfers a camera image to the first robot device 100. When the second robot device 200 issues an alert, the second robot device 200 transfers the information including the alert to the first robot device 100. The second robot device 200 transfers information indicating that an alert has been issued to the first robot device 100.
 第1ロボット装置100は、情報を受信する(ステップS410)。第1ロボット装置100は、アラートが発行された場合、アラートが発行されたことを示す情報を受信する。 The first robot device 100 receives the information (step S410). When the alert is issued, the first robot device 100 receives information indicating that the alert has been issued.
 第1ロボット装置100は、アラートを確認する(ステップS411)。第1ロボット装置100は、第2ロボット装置200から受信した情報に、アラートが発行されたことを示す情報が含まれるかを確認する。 The first robot device 100 confirms the alert (step S411). The first robot device 100 confirms whether the information received from the second robot device 200 includes information indicating that an alert has been issued.
 第1ロボット装置100は、手元から異物を退避させる(ステップS412)。第1ロボット装置100は、第2ロボット装置200から受信した情報に、アラートが発行されたことを示す情報が含まれる場合、監視対象の手元から異物を退避させる。 The first robot device 100 retracts the foreign matter from the hand (step S412). When the information received from the second robot device 200 includes information indicating that an alert has been issued, the first robot device 100 evacuates the foreign matter from the hand of the monitoring target.
 第1ロボット装置100は、小ロボットの自己位置を推定する(ステップS413)。第1ロボット装置100は、危険度マップを更新したり(ステップS414)、ステップS418の処理を行ったりする。 The first robot device 100 estimates the self-position of the small robot (step S413). The first robot device 100 updates the risk map (step S414) and performs the process of step S418.
 第1ロボット装置100は、カメラ画像を撮影する(ステップS415)。第1ロボット装置100は、大ロボットの自己位置を推定する(ステップS416)。第1ロボット装置100は、オブジェクトを認識する(ステップS417)。第1ロボット装置100は、危険度マップを更新する(ステップS414)。 The first robot device 100 captures a camera image (step S415). The first robot device 100 estimates the self-position of the large robot (step S416). The first robot device 100 recognizes an object (step S417). The first robot device 100 updates the risk map (step S414).
 第1ロボット装置100は、大ロボットの位置制御を行う(ステップS418)。 The first robot device 100 controls the position of the large robot (step S418).
 第1ロボット装置100は、危険判定を行う(ステップS419)。例えば、第1ロボット装置100は、監視対象に関する危険が有るかを判定する。第1ロボット装置100は、画像に含まれる監視対象に関する危険が有るかを判定する。 The first robot device 100 makes a danger determination (step S419). For example, the first robot device 100 determines whether there is a danger related to the monitored object. The first robot device 100 determines whether or not there is a danger related to the monitoring target included in the image.
 第1ロボット装置100は、危険があると判定した場合(ステップS420:Yes)、対象を危険から退避させる(ステップS421)。第1ロボット装置100は、監視対象を危険から退避させる。そして、第1ロボット装置100は、ステップS415の処理を繰り返す。 When the first robot device 100 determines that there is a danger (step S420: Yes), the first robot device 100 evacuates the target from the danger (step S421). The first robot device 100 evacuates the monitored object from danger. Then, the first robot device 100 repeats the process of step S415.
 第1ロボット装置100は、危険がないと判定した場合(ステップS420:No)、ステップS410で受信した情報での危険判定の処理を終了する。そして、第1ロボット装置100は、ステップS415の処理を繰り返す。 When the first robot device 100 determines that there is no danger (step S420: No), the first robot device 100 ends the process of determining the danger based on the information received in step S410. Then, the first robot device 100 repeats the process of step S415.
 上記のように、情報処理システム1は、誤飲抑制処理を行うことにより、誤飲しそうな時には小ロボット(第2ロボット装置200)からアラートを発行しながら、大ロボット(第1ロボット装置100)に情報を送り、大ロボットのアームで防止すること等ができる。これにより、情報処理システム1は、監視対象が口に誤飲する危険があるもの(オブジェクト)を運ぼうとしているのを検知した場合、オブジェクトを排除する等により、適切に監視対象による誤飲を抑制することができる。 As described above, by performing the accidental ingestion suppression process, the information processing system 1 can, when accidental ingestion seems imminent, have the small robot (second robot device 200) issue an alert and send the information to the large robot (first robot device 100), which can then prevent the ingestion with its arm. As a result, when the information processing system 1 detects that the monitoring target is trying to bring an object that poses a swallowing hazard to its mouth, it can appropriately suppress accidental ingestion by the monitoring target, for example by removing the object.
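 A minimal sketch of the alert-and-retract flow of the accidental ingestion suppression process is given below, with the small robot's danger check and the large robot's arm action reduced to stub functions under assumed names and data structures.

from typing import Any, Dict

def small_robot_danger_check(observation: Dict[str, Any]) -> bool:
    """Hypothetical stub: raise an alert when an object is near the child's face or hands."""
    return bool(observation.get("object_near_face"))

def large_robot_handle_alert(alert: bool, observation: Dict[str, Any]) -> None:
    """Hypothetical stub: when alerted, retract the foreign object with the arm."""
    if alert:
        obj = observation.get("object_near_face", "unknown object")
        print(f"large robot: removing '{obj}' from the child's reach with the arm")
    else:
        print("large robot: no action required")

if __name__ == "__main__":
    obs = {"object_near_face": "coin"}
    alert = small_robot_danger_check(obs)   # cf. steps S406/S408: danger judged, alert issued
    large_robot_handle_alert(alert, obs)    # cf. steps S410-S412: alert confirmed, object retracted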
[1-7-4.退避誘導処理の例]
 次に、図10を用いて、退避誘導処理の例を示す。図10は、情報処理システムにおける退避誘導処理の一例を示す図である。具体的には、図10は、情報処理システム1における退避誘導処理の例を示す。なお、図10に示すステップ番号は、処理の説明を行うためのもの(符号)であり処理の順序を示すものではない。また、図7~図9と同様の処理については適宜説明を省略する。
[1-7-4. Example of evacuation guidance processing]
Next, an example of the evacuation guidance process is shown with reference to FIG. FIG. 10 is a diagram showing an example of evacuation guidance processing in the information processing system. Specifically, FIG. 10 shows an example of evacuation guidance processing in the information processing system 1. The step numbers shown in FIG. 10 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIGS. 7 to 9 will be omitted as appropriate.
 図10に示すように、第2ロボット装置200は、カメラ画像を撮影する(ステップS501)。第2ロボット装置200は、顔や人を認識する(ステップS502)。第2ロボット装置200は、顔の位置を推定する(ステップS503)。第2ロボット装置200は、ステップS506の処理を行う。第2ロボット装置200は、自己位置制御を行う(ステップS504)。 As shown in FIG. 10, the second robot device 200 captures a camera image (step S501). The second robot device 200 recognizes a face or a person (step S502). The second robot device 200 estimates the position of the face (step S503). The second robot device 200 performs the process of step S506. The second robot device 200 performs self-position control (step S504).
 そして、第2ロボット装置200は、子供の気になる音を出しながら退避する(ステップS505)。例えば、第2ロボット装置200は、音声出力を行いながら監視対象から離れるように移動することで、監視対象が第2ロボット装置200を追従するように退避する。 Then, the second robot device 200 retreats while making a sound that attracts the child's attention (step S505). For example, by moving away from the monitored target while outputting sound, the second robot device 200 retreats in such a way that the monitored target follows the second robot device 200.
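 The disclosure does not specify how the small robot chooses where to retreat. As one hedged sketch (the function name, step size, and fixed room boundary below are illustrative assumptions, not part of the disclosure), the retreat target can be computed by stepping directly away from the monitored target's estimated position while staying inside the room:

```python
import math

def retreat_waypoint(robot_xy, target_xy, step=0.5,
                     room_min=(0.0, 0.0), room_max=(5.0, 4.0)):
    """Return a waypoint one step away from the monitored target.

    The small robot moves along the vector pointing from the target to
    itself, so the target is drawn to follow it away from the danger.
    The fixed room bounds are an illustrative assumption.
    """
    rx, ry = robot_xy
    tx, ty = target_xy
    dx, dy = rx - tx, ry - ty
    norm = math.hypot(dx, dy) or 1.0          # avoid division by zero
    nx = rx + step * dx / norm
    ny = ry + step * dy / norm
    # Keep the waypoint inside the room so the robot does not plan into a wall.
    nx = min(max(nx, room_min[0]), room_max[0])
    ny = min(max(ny, room_min[1]), room_max[1])
    return (nx, ny)

if __name__ == "__main__":
    # Robot at (1.0, 1.0), child at (1.5, 1.0): retreat further to the left.
    print(retreat_waypoint((1.0, 1.0), (1.5, 1.0)))  # -> (0.5, 1.0)
```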
 第2ロボット装置200は、情報を転送する(ステップS506)。第1ロボット装置100は、情報を受信する(ステップS507)。第1ロボット装置100は、小ロボットの自己位置を推定する(ステップS508)。第1ロボット装置100は、危険度マップを更新したり(ステップS509)、ステップS513の処理を行ったりする。 The second robot device 200 transfers information (step S506). The first robot device 100 receives the information (step S507). The first robot device 100 estimates the self-position of the small robot (step S508). The first robot device 100 updates the risk map (step S509) and performs the process of step S513.
 第1ロボット装置100は、カメラ画像を撮影する(ステップS510)。第1ロボット装置100は、大ロボットの自己位置を推定する(ステップS511)。第1ロボット装置100は、オブジェクトを認識する(ステップS512)。第1ロボット装置100は、危険度マップを更新する(ステップS509)。 The first robot device 100 captures a camera image (step S510). The first robot device 100 estimates the self-position of the large robot (step S511). The first robot device 100 recognizes the object (step S512). The first robot device 100 updates the risk map (step S509).
 第1ロボット装置100は、大ロボットの位置制御を行う(ステップS513)。 The first robot device 100 controls the position of the large robot (step S513).
 第1ロボット装置100は、危険判定を行う(ステップS514)。例えば、第1ロボット装置100は、監視対象に関する危険が有るかを判定する。第1ロボット装置100は、画像に含まれる監視対象に関する危険が有るかを判定する。 The first robot device 100 makes a danger determination (step S514). For example, the first robot device 100 determines whether there is a danger related to the monitored object. The first robot device 100 determines whether or not there is a danger related to the monitoring target included in the image.
 第1ロボット装置100は、危険がないと判定した場合(ステップS515:No)、ステップS507で受信した情報での危険判定の処理を終了する。そして、第1ロボット装置100は、ステップS510の処理を繰り返す。 When the first robot device 100 determines that there is no danger (step S515: No), the first robot device 100 ends the process of determining the danger based on the information received in step S507. Then, the first robot device 100 repeats the process of step S510.
 第1ロボット装置100は、危険があると判定した場合(ステップS515:Yes)、小ロボットの経路計画を行う(ステップS516)。例えば、第1ロボット装置100は、危険度マップを基に、第2ロボット装置200が子供の気になる音を出しながら退避するように、第2ロボット装置200の経路計画を生成する。 When the first robot device 100 determines that there is a danger (step S515: Yes), the first robot device 100 plans the route of the small robot (step S516). For example, the first robot device 100 generates a route plan for the second robot device 200 based on the risk map so that the second robot device 200 evacuates while making a sound that the child is interested in.
 第1ロボット装置100は、情報を転送する(ステップS517)。例えば、第1ロボット装置100は、生成した経路計画を第2ロボット装置200に転送する。 The first robot device 100 transfers information (step S517). For example, the first robot device 100 transfers the generated route plan to the second robot device 200.
 第2ロボット装置200は、情報を受信する(ステップS518)。例えば、第2ロボット装置200は、第1ロボット装置100から経路計画を受信する。 The second robot device 200 receives the information (step S518). For example, the second robot device 200 receives a route plan from the first robot device 100.
 第2ロボット装置200は、受信した経路計画を基に自己位置を制御する(ステップS504)。例えば、第2ロボット装置200は、第1ロボット装置100から受信した経路計画に基づいて、子供の気になる音を出しながら退避する(ステップS505)。 The second robot device 200 controls its own position based on the received route plan (step S504). For example, the second robot device 200 evacuates while making a sound that the child is interested in based on the route plan received from the first robot device 100 (step S505).
 上記のように、情報処理システム1は、退避誘導処理を行うことにより、大ロボット(第1ロボット装置100)が危険を判断したら、小ロボット(第2ロボット装置200)の経路計画を作成してコマンドを発行すること等ができる。これにより、情報処理システム1は、子供が花瓶の下にいて危ないときに、小ロボットに退避させる動きをさせることで、適切に子供の退避誘導を行うことができる。 As described above, by performing the evacuation guidance process, the information processing system 1 allows the large robot (first robot device 100), when it determines that there is a danger, to create a route plan for the small robot (second robot device 200) and issue a command, for example. As a result, when the child is in a dangerous place, for example under a vase, the information processing system 1 can appropriately guide the child to evacuate by making the small robot perform a movement that leads the child away.
[1-7-5.計画更新処理の例]
 次に、図11を用いて、第1ロボット装置100の計画更新処理の例を示す。図11は、情報処理システムにおける計画更新処理の一例を示す図である。具体的には、図11は、情報処理システム1における計画更新処理の例を示す。なお、図11に示すステップ番号は、処理の説明を行うためのもの(符号)であり処理の順序を示すものではない。また、図7~図10と同様の処理については適宜説明を省略する。
[1-7-5. Example of plan update process]
Next, an example of the plan update process of the first robot device 100 will be shown with reference to FIG. FIG. 11 is a diagram showing an example of a plan update process in the information processing system. Specifically, FIG. 11 shows an example of the plan update process in the information processing system 1. The step numbers shown in FIG. 11 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIGS. 7 to 10 will be omitted as appropriate.
 図11に示すように、第2ロボット装置200は、カメラ画像を撮影する(ステップS601)。第2ロボット装置200は、情報を転送する(ステップS602)。第1ロボット装置100は、情報を受信する(ステップS603)。第1ロボット装置100は、小ロボットの自己位置を推定する(ステップS604)。 As shown in FIG. 11, the second robot device 200 captures a camera image (step S601). The second robot device 200 transfers information (step S602). The first robot device 100 receives the information (step S603). The first robot device 100 estimates the self-position of the small robot (step S604).
 第1ロボット装置100は、障害物の認識を行う(ステップS605)。例えば、第1ロボット装置100は、第2ロボット装置200から受信した情報を基に、障害物を認識する。 The first robot device 100 recognizes an obstacle (step S605). For example, the first robot device 100 recognizes an obstacle based on the information received from the second robot device 200.
 第1ロボット装置100は、危険度マップを更新する(ステップS606)。例えば、第1ロボット装置100は、第2ロボット装置200から受信した情報や認識した障害物の情報を基に、危険度マップを更新する。 The first robot device 100 updates the risk map (step S606). For example, the first robot device 100 updates the risk map based on the information received from the second robot device 200 and the recognized obstacle information.
 第1ロボット装置100は、カメラ画像を撮影する(ステップS607)。第1ロボット装置100は、大ロボットの自己位置を推定する(ステップS608)。第1ロボット装置100は、オブジェクトを認識する(ステップS609)。第1ロボット装置100は、危険度マップを更新したり(ステップS606)、大ロボットの位置制御を行ったりする(ステップS610)。 The first robot device 100 captures a camera image (step S607). The first robot device 100 estimates the self-position of the large robot (step S608). The first robot device 100 recognizes the object (step S609). The first robot device 100 updates the risk map (step S606) and controls the position of the large robot (step S610).
 また、第1ロボット装置100は、小ロボットの経路計画を行う(ステップS611)。例えば、第1ロボット装置100は、危険度マップを基に、第2ロボット装置200が障害物を避けて移動するように、第2ロボット装置200の経路計画を生成する。 Further, the first robot device 100 performs a route plan for the small robot (step S611). For example, the first robot device 100 generates a route plan for the second robot device 200 so that the second robot device 200 moves while avoiding obstacles based on the risk map.
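 The disclosure leaves the planner itself open. As a minimal sketch, assuming the risk map is a 2D grid of risk values and that any cell above a threshold is treated as an obstacle (the grid layout, threshold, and function names below are illustrative assumptions), the route plan for the small robot could be produced by a breadth-first search:

```python
from collections import deque

def plan_route(risk_map, start, goal, max_risk=0.5):
    """Breadth-first search over a 2D risk grid.

    Cells with risk greater than `max_risk` are treated as obstacles.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(risk_map), len(risk_map[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in came_from
                    and risk_map[nr][nc] <= max_risk):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None  # no safe route found

if __name__ == "__main__":
    grid = [
        [0.0, 0.0, 0.9, 0.0],
        [0.0, 0.0, 0.9, 0.0],
        [0.0, 0.0, 0.0, 0.0],
    ]
    # The high-risk cells in the middle column are routed around.
    print(plan_route(grid, (0, 0), (0, 3)))
```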
 第1ロボット装置100は、情報を転送する(ステップS612)。第2ロボット装置200は、情報を受信する(ステップS613)。第2ロボット装置200は、受信した経路計画を基に自己位置を制御する(ステップS614)。 The first robot device 100 transfers information (step S612). The second robot device 200 receives the information (step S613). The second robot device 200 controls its own position based on the received route plan (step S614).
 第2ロボット装置200は、衝突判定を行う(ステップS615)。第2ロボット装置200は、経路計画に沿って移動した場合に衝突したり、所定の範囲以内に近接したりした物体があったかを判定する。第2ロボット装置200は、情報を転送する(ステップS602)。第2ロボット装置200は、経路計画に沿って移動した際に撮影したカメラ画像を転送する。例えば、第2ロボット装置200は、経路計画に沿って移動した場合に衝突したり、所定の範囲以内に近接したりした物体を示す情報を転送する。 The second robot device 200 makes a collision determination (step S615). The second robot device 200 determines whether there was an object that it collided with, or came within a predetermined range of, while moving along the route plan. The second robot device 200 transfers information (step S602). The second robot device 200 transfers a camera image taken while moving along the route plan. For example, the second robot device 200 transfers information indicating an object that it collided with, or came within a predetermined range of, while moving along the route plan.
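 One hedged way to picture this collision determination is the sketch below, which assumes the small robot holds a set of detected object positions; the dictionary layout, proximity radius, and bumper flag are illustrative assumptions rather than the disclosed implementation:

```python
import math

def collision_check(robot_xy, object_positions, proximity_radius=0.3,
                    bumper_pressed=False):
    """Return the objects that the robot hit or came too close to.

    `object_positions` maps an object id to its (x, y) position.
    An object counts as a collision candidate when it lies within
    `proximity_radius` of the robot; a bumper reading is reported too.
    """
    hits = {}
    for obj_id, (ox, oy) in object_positions.items():
        if math.dist(robot_xy, (ox, oy)) <= proximity_radius:
            hits[obj_id] = (ox, oy)
    return {"bumper": bumper_pressed, "near_objects": hits}

if __name__ == "__main__":
    report = collision_check((1.0, 1.0), {"toy": (1.1, 1.0), "sofa": (3.0, 2.0)})
    print(report)  # the toy is within 0.3 m, the sofa is not
```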
 上記のように、情報処理システム1は、計画更新処理を行うことにより、小ロボット(第2ロボット装置200)の衝突判定を受けて、大ロボット(第1ロボット装置100)が危険度マップの更新と小ロボットの経路計画の更新を行うこと等ができる。これにより、情報処理システム1は、小ロボットが計画された経路を動いているときに障害物とぶつかった場合であっても、適切に危険度マップや小ロボットの経路を更新することができる。 As described above, by performing the plan update process, the information processing system 1 allows the large robot (first robot device 100) to update the risk map and the route plan of the small robot in response to the collision determination by the small robot (second robot device 200). As a result, the information processing system 1 can appropriately update the risk map and the route of the small robot even when the small robot collides with an obstacle while moving along the planned route.
[1-7-6.救出処理の例]
 次に、図12を用いて、第1ロボット装置100の救出処理の例を示す。図12は、情報処理システムにおける救出処理の一例を示す図である。具体的には、図12は、情報処理システム1における救出処理の例を示す。なお、図12に示すステップ番号は、処理の説明を行うためのもの(符号)であり処理の順序を示すものではない。また、図7~図11と同様の処理については適宜説明を省略する。
[1-7-6. Example of rescue processing]
Next, an example of the rescue process of the first robot device 100 is shown with reference to FIG. FIG. 12 is a diagram showing an example of rescue processing in an information processing system. Specifically, FIG. 12 shows an example of rescue processing in the information processing system 1. The step numbers shown in FIG. 12 are for explaining the processing (reference numerals) and do not indicate the order of the processing. Further, the same processing as in FIGS. 7 to 11 will be omitted as appropriate.
 図12に示すように、第2ロボット装置200は、カメラ画像を撮影する(ステップS701)。第2ロボット装置200は、加速度センサによる検知を行う(ステップS702)。例えば、第2ロボット装置200は、加速度センサにより第2ロボット装置200の加速度を検知する。 As shown in FIG. 12, the second robot device 200 captures a camera image (step S701). The second robot device 200 performs detection with the acceleration sensor (step S702). For example, the second robot device 200 detects the acceleration of the second robot device 200 with the acceleration sensor.
 第2ロボット装置200は、持ち上げ検知を行う(ステップS703)。第2ロボット装置200は、撮影したカメラ画像や加速度センサにより検知された情報を基に、第2ロボット装置200が持ち上げられたかの検知を行う。例えば、第2ロボット装置200は、所定の方向(例えば上方向)に所定の速度以上で移動した場合、第2ロボット装置200が持ち上げられたと判定する。 The second robot device 200 detects lifting (step S703). The second robot device 200 detects whether or not the second robot device 200 has been lifted based on the captured camera image and the information detected by the acceleration sensor. For example, when the second robot device 200 moves in a predetermined direction (for example, upward direction) at a predetermined speed or higher, the second robot device 200 determines that the second robot device 200 has been lifted.
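 As a minimal sketch of this lift detection, assuming a stream of vertical accelerometer samples and an illustrative speed threshold (neither of which is fixed by the disclosure), the upward speed can be integrated over time and compared against the threshold:

```python
def detect_lift(accel_z_samples, dt=0.02, gravity=9.81,
                lift_speed_threshold=0.25):
    """Crude lift detection from vertical accelerometer samples.

    Integrates (a_z - g) over time; if the estimated upward speed
    exceeds the threshold, the robot is assumed to have been lifted.
    The threshold and sample period are illustrative assumptions.
    """
    vertical_speed = 0.0
    for a_z in accel_z_samples:
        vertical_speed += (a_z - gravity) * dt
        if vertical_speed > lift_speed_threshold:
            return True
    return False

if __name__ == "__main__":
    resting = [9.81] * 50
    lifted = [9.81] * 10 + [12.0] * 20   # a burst of upward acceleration
    print(detect_lift(resting))  # False
    print(detect_lift(lifted))   # True
```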
 第2ロボット装置200は、情報を転送する(ステップS704)。第1ロボット装置100は、情報を受信する(ステップS705)。第1ロボット装置100は、小ロボットの自己位置を推定する(ステップS706)。 The second robot device 200 transfers information (step S704). The first robot device 100 receives the information (step S705). The first robot device 100 estimates the self-position of the small robot (step S706).
 第1ロボット装置100は、危険度マップを更新したり(ステップS707)、大ロボットの位置制御を行ったり(ステップS709)、小ロボットの経路計画を行ったりする(ステップS711)。 The first robot device 100 updates the risk map (step S707), controls the position of the large robot (step S709), and plans the route of the small robot (step S711).
 第1ロボット装置100は、カメラ画像を撮影する(ステップS708)。第1ロボット装置100は、大ロボットの位置制御を行う(ステップS709)。例えば、第1ロボット装置100は、第2ロボット装置200の位置へ移動する。 The first robot device 100 captures a camera image (step S708). The first robot device 100 controls the position of the large robot (step S709). For example, the first robot device 100 moves to the position of the second robot device 200.
 第1ロボット装置100は、小ロボットを救出する(ステップS710)。例えば、第1ロボット装置100は、第2ロボット装置200を所定の位置に運んだり、第2ロボット装置200を所定の姿勢に調整したりすることにより、第2ロボット装置200を救出する。 The first robot device 100 rescues the small robot (step S710). For example, the first robot device 100 rescues the second robot device 200 by carrying the second robot device 200 to a predetermined position or adjusting the second robot device 200 to a predetermined posture.
 また、第1ロボット装置100は、小ロボットの経路計画を行う(ステップS711)。第1ロボット装置100は、情報を転送する(ステップS712)。第2ロボット装置200は、情報を受信する。第2ロボット装置200は、受信した経路計画を基に自己位置を制御する(ステップS714)。第2ロボット装置200は、自己位置の制御に応じて移動するとともに、情報を転送する(ステップS704)。第2ロボット装置200は、経路計画に沿って移動した際に撮影したカメラ画像を転送する。 Further, the first robot device 100 performs a route plan for the small robot (step S711). The first robot device 100 transfers information (step S712). The second robot device 200 receives the information. The second robot device 200 controls its own position based on the received route plan (step S714). The second robot device 200 moves according to the control of its own position and transfers information (step S704). The second robot device 200 transfers a camera image taken when the robot device 200 moves along the route plan.
 上記のように、情報処理システム1は、救出処理を行うことにより、小ロボット(第2ロボット装置200)の持ち上げ検知結果と画像を基に、大ロボット(第1ロボット装置100)が小ロボットを探して地面に降ろすこと等ができる。これにより、情報処理システム1は、小ロボットが子供に持ち上げられてどこかに置かれてしまった場合であっても、適切に小ロボットを救出することができる。 As described above, by performing the rescue process, the information processing system 1 allows the large robot (first robot device 100) to search for the small robot (second robot device 200) and lower it to the ground, for example, based on the small robot's lift detection result and image. As a result, the information processing system 1 can appropriately rescue the small robot even if the small robot is lifted by a child and placed somewhere.
[1-8.監視対象の認識例]
 次に、図13を用いて、監視対象の認識例を示す。図13は、情報処理システムにおける監視対象の認識の一例を示す図である。なお、図13に示す各処理は、第1ロボット装置100や第2ロボット装置200等、情報処理システム1に含まれるいずれの装置が行ってもよい。
[1-8. Recognition example of monitoring target]
Next, an example of recognizing the monitoring target is shown with reference to FIG. FIG. 13 is a diagram showing an example of recognition of a monitoring target in an information processing system. Each process shown in FIG. 13 may be performed by any device included in the information processing system 1, such as the first robot device 100 and the second robot device 200.
 例えば、第2ロボット装置200が人やオブジェクトを認識する機能(例えば認識部232)を有する場合、第2ロボット装置200が人やオブジェクトの認識を行ってもよい。また、第2ロボット装置200が人やオブジェクトを認識する機能(例えば認識部232)を有しない場合、第1ロボット装置100が人やオブジェクトの認識を行ってもよい。例えば、第2ロボット装置200が顔の位置を推定する機能(例えば推定部233)を有する場合、第2ロボット装置200が顔の位置の推定を行ってもよい。また、第2ロボット装置200が顔の位置を推定する機能(例えば推定部233)を有しない場合、第1ロボット装置100が顔の位置の推定を行ってもよい。なお、第1ロボット装置100と第2ロボット装置200とが連携して各処理を行ってもよい。 For example, when the second robot device 200 has a function of recognizing a person or an object (for example, the recognition unit 232), the second robot device 200 may recognize the person or the object. Further, when the second robot device 200 does not have a function of recognizing a person or an object (for example, the recognition unit 232), the first robot device 100 may recognize the person or the object. For example, when the second robot device 200 has a function of estimating the position of the face (for example, the estimation unit 233), the second robot device 200 may estimate the position of the face. Further, when the second robot device 200 does not have a function of estimating the position of the face (for example, the estimation unit 233), the first robot device 100 may estimate the position of the face. The first robot device 100 and the second robot device 200 may cooperate with each other to perform each process.
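 One hedged way to picture this capability-dependent assignment, assuming each robot exposes the processing steps it supports as callables keyed by name (an illustrative layout, not the disclosed design), is to run a step on the second robot when it has the corresponding function and fall back to the first robot otherwise:

```python
def run_step(step_name, small_robot, large_robot, frame):
    """Run a recognition/estimation step on whichever robot supports it.

    Each robot is assumed to expose a dict of callables keyed by step
    name (e.g. "recognize_person", "estimate_face_position"); this
    layout is an illustrative assumption.
    """
    if step_name in small_robot:
        return small_robot[step_name](frame)   # run on the small robot
    return large_robot[step_name](frame)       # fall back to the large robot

if __name__ == "__main__":
    small = {"recognize_person": lambda f: "person found by small robot"}
    large = {"recognize_person": lambda f: "person found by large robot",
             "estimate_face_position": lambda f: (160, 60)}
    print(run_step("recognize_person", small, large, frame=None))
    print(run_step("estimate_face_position", small, large, frame=None))
```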
 情報処理システム1は、人を認識する(ステップS21)。情報処理システム1は、画像に含まれる人を認識する。図13の例では、情報処理システム1は、画像IM21に含まれる人を認識する。情報処理システム1は、画像IM21に含まれる人を監視対象TGとして認識する。 The information processing system 1 recognizes a person (step S21). The information processing system 1 recognizes a person included in the image. In the example of FIG. 13, the information processing system 1 recognizes a person included in the image IM21. The information processing system 1 recognizes a person included in the image IM 21 as a monitoring target TG.
 そして、情報処理システム1は、顔の位置を推定する(ステップS22)。情報処理システム1は、画像中の人の顔が位置する領域を推定する。図13の例では、情報処理システム1は、画像IM21に含まれる人である監視対象TGの顔の位置を推定する。情報処理システム1は、画像IM21の中央上部の領域AR1に監視対象TGの顔FCが位置すると推定する。 Then, the information processing system 1 estimates the position of the face (step S22). The information processing system 1 estimates the area in which the human face is located in the image. In the example of FIG. 13, the information processing system 1 estimates the position of the face of the monitored TG, which is a person included in the image IM21. The information processing system 1 estimates that the face FC of the monitored TG is located in the region AR1 in the upper center of the image IM21.
 また、情報処理システム1は、オブジェクトを認識する(ステップS23)。情報処理システム1は、画像に含まれるオブジェクト(物体)を認識する。図13の例では、情報処理システム1は、画像IM21に含まれるオブジェクトOB21を認識する。また、情報処理システム1は、画像IM21の中央下部の領域AR2にオブジェクトOB21が位置すると認識する。情報処理システム1は、顔FCの領域AR1に重なる位置にオブジェクトOB21の領域AR2が位置すると認識する。 Further, the information processing system 1 recognizes the object (step S23). The information processing system 1 recognizes an object (object) included in the image. In the example of FIG. 13, the information processing system 1 recognizes the object OB21 included in the image IM21. Further, the information processing system 1 recognizes that the object OB21 is located in the region AR2 at the lower center of the image IM21. The information processing system 1 recognizes that the area AR2 of the object OB21 is located at a position overlapping the area AR1 of the face FC.
 このように、図13の例では、情報処理システム1は、人である監視対象TGの顔FCの付近にオブジェクトOB21が位置すると推定する。そのため、情報処理システム1は、オブジェクトOB21が誤飲の可能性がある物体である場合、監視対象TGがオブジェクトOB21を誤飲する危険があると判定する。そのため、情報処理システム1は、監視対象TGがオブジェクトOB21を誤飲する危険の発生を抑制する処理を実行する。 As described above, in the example of FIG. 13, the information processing system 1 estimates that the object OB21 is located near the face FC of the monitored TG who is a person. Therefore, when the object OB21 is an object that may be accidentally swallowed, the information processing system 1 determines that the monitored TG has a risk of accidentally swallowing the object OB21. Therefore, the information processing system 1 executes a process of suppressing the occurrence of a risk that the monitored TG accidentally swallows the object OB21.
 例えば、第1ロボット装置100は、操作部16により監視対象TGの手を把持し、監視対象TGの手の動きを抑制することで、監視対象TGがオブジェクトOB21を誤飲する危険の発生を抑制する。また、例えば、第1ロボット装置100は、操作部16により監視対象TGの手からオブジェクトOB21を取り除くことにより、監視対象TGがオブジェクトOB21を誤飲する危険の発生を抑制する。 For example, the first robot device 100 suppresses the occurrence of the risk of the monitored target TG accidentally swallowing the object OB21 by grasping the hand of the monitored target TG with the operation unit 16 and restraining the movement of the hand of the monitored target TG. Further, for example, the first robot device 100 suppresses the occurrence of the risk of the monitored target TG accidentally swallowing the object OB21 by removing the object OB21 from the hand of the monitored target TG with the operation unit 16.
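 The overlap between the estimated face region AR1 and the object region AR2 described above can be pictured, as one illustrative sketch, as an axis-aligned rectangle intersection test combined with a flag indicating whether the object is small enough to be swallowed (the region format and the flag are assumptions for illustration, not the disclosed algorithm):

```python
def regions_overlap(a, b):
    """a and b are (x_min, y_min, x_max, y_max) rectangles in image pixels."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def ingestion_risk(face_region, object_region, object_is_swallowable):
    """Danger when a swallowable object overlaps the estimated face region."""
    return object_is_swallowable and regions_overlap(face_region, object_region)

if __name__ == "__main__":
    face = (120, 20, 200, 100)   # example box for AR1, upper-center of the image
    obj = (150, 80, 180, 140)    # example box for AR2, overlapping the face box
    print(ingestion_risk(face, obj, object_is_swallowable=True))   # True
    print(ingestion_risk(face, obj, object_is_swallowable=False))  # False
```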
[1-9.マップの更新例]
 次に、図14~図16を用いて、マップの更新例を示す。図14は、情報処理システムにおけるオブジェクトの分類の一例を示す図である。図15は、情報処理システムにおける危険度マップの更新の一例を示す図である。図16は、情報処理システムにおける危険度マップの更新の概念図の一例を示す図である。なお、図1と同様の点については説明を省略する。
[1-9. Map update example]
Next, an example of updating the map is shown with reference to FIGS. 14 to 16. FIG. 14 is a diagram showing an example of classification of objects in an information processing system. FIG. 15 is a diagram showing an example of updating the risk map in the information processing system. FIG. 16 is a diagram showing an example of a conceptual diagram of updating a risk map in an information processing system. The same points as in FIG. 1 will not be described.
[1-9-1.オブジェクトの分類]
 まず、マップの更新の処理の説明に先立って、マップの更新に用いられるオブジェクトの分類について、図14を用いて、情報処理システム1におけるオブジェクトの分類について説明する。
[1-9-1. Object classification]
 First, prior to the description of the map update process, the classification of objects used for updating the map in the information processing system 1 will be described with reference to FIG. 14.
 情報処理システム1は、空間SP内に位置する各オブジェクトOB1~OB7をその属性に応じて分類する(ステップS31)。例えば、情報処理システム1は、各オブジェクトOB1~OB7を、そのオブジェクトの位置や姿勢(以下「配置態様」ともいう)の変更の容易性に応じて分類する。例えば、情報処理システム1は、各オブジェクトOB1~OB7を、配置態様の変更が困難なカテゴリ「CT1(Rigid)」と、配置態様の変更が容易なカテゴリ「CT2(Moving)」との2つのカテゴリに分類する。なお、配置態様の変更が容易なカテゴリ「CT2(Moving)」は、配置態様変更の容易さの度合いに応じて「CT2(相当容易)」、「CT3(容易)」、「CT4(やや容易)」など、複数のレベルに分類されてもよい。 The information processing system 1 classifies the objects OB1 to OB7 located in the space SP according to their attributes (step S31). For example, the information processing system 1 classifies the objects OB1 to OB7 according to how easily the position and posture (hereinafter also referred to as the "arrangement mode") of each object can be changed. For example, the information processing system 1 classifies the objects OB1 to OB7 into two categories: a category "CT1 (Rigid)" in which the arrangement mode is difficult to change, and a category "CT2 (Moving)" in which the arrangement mode can easily be changed. Note that the category "CT2 (Moving)", in which the arrangement mode can easily be changed, may be further divided into a plurality of levels such as "CT2 (very easy)", "CT3 (easy)", and "CT4 (somewhat easy)" according to the degree of ease of changing the arrangement mode.
 この場合、情報処理システム1は、カテゴリCT1に分類されたオブジェクトについては、配置態様の変更が困難なため、危険度マップ更新の際に、配置態様の更新が無いと推定してもよい。また、情報処理システム1は、カテゴリCT2に分類されたオブジェクトについては、配置態様の変更が容易なため、危険度マップ更新の際に、配置態様の更新が起こり得ると推定してもよい。そして、情報処理システム1は、カテゴリCT1に分類されたオブジェクトの配置態様を基に、カテゴリCT2に分類されたオブジェクトの配置態様の更新を行うことで、危険度マップを更新してもよい。 In this case, since the arrangement mode of objects classified into the category CT1 is difficult to change, the information processing system 1 may estimate that their arrangement mode is not updated when the risk map is updated. On the other hand, since the arrangement mode of objects classified into the category CT2 can easily be changed, the information processing system 1 may estimate that their arrangement mode can be updated when the risk map is updated. Then, the information processing system 1 may update the risk map by updating the arrangement mode of the objects classified into the category CT2 based on the arrangement mode of the objects classified into the category CT1.
 なお、上記は一例であり、オブジェクトは、その属性により種々のカテゴリに分類されてもよく、カテゴリは3つ以上であってもよい。例えば、オブジェクトは、サイズ、重さ、硬さ、配置位置の高さ等の種々の属性に基づくカテゴリに分類されてもよい。また、例えば、情報処理システム1は、空間SP内に監視対象TGについて、他のオブジェクトOB1~OB7等とは異なるカテゴリ「CT0(Tracking)」として管理してもよい。 The above is an example, and the objects may be classified into various categories according to their attributes, and the number of categories may be three or more. For example, objects may be categorized based on various attributes such as size, weight, hardness, height of placement position, and the like. Further, for example, the information processing system 1 may manage the monitored TG in the space SP as a category "CT0 (Tracking)" different from other objects OB1 to OB7 and the like.
 情報処理システム1は、オブジェクトOB1をカテゴリCT1に分類する。情報処理システム1は、空間SPに設置された流し台であるオブジェクトOB1をカテゴリCT1に分類する。例えば、情報処理システム1は、画像や各オブジェクトの情報を用いて、各オブジェクトをカテゴリに分類する。 The information processing system 1 classifies the object OB1 into the category CT1. The information processing system 1 classifies the object OB1, which is a sink installed in the space SP, into the category CT1. For example, the information processing system 1 classifies each object into categories using images and information on each object.
 情報処理システム1は、オブジェクトOB2をカテゴリCT2に分類する。情報処理システム1は、空間SPに配置されたテーブルであるオブジェクトOB2をカテゴリCT2に分類する。また、情報処理システム1は、オブジェクトOB3をカテゴリCT2に分類する。情報処理システム1は、空間SPに配置されたソファーであるオブジェクトOB3をカテゴリCT2に分類する。 The information processing system 1 classifies the object OB2 into the category CT2. The information processing system 1 classifies the object OB2, which is a table arranged in the space SP, into the category CT2. Further, the information processing system 1 classifies the object OB3 into the category CT2. The information processing system 1 classifies the object OB3, which is a sofa arranged in the space SP, into the category CT2.
[1-9-2.危険度マップの更新]
 次に、図15及び図16を用いて危険度マップの更新の処理について説明する。まず、図15を用いて、危険度マップの更新を概念的に説明する。図15は、更新前後の危険度マップを示す。
[1-9-2. Update risk map]
Next, the process of updating the risk map will be described with reference to FIGS. 15 and 16. First, the update of the risk map will be conceptually described with reference to FIG. FIG. 15 shows a risk map before and after the update.
 情報処理システム1は、危険度マップMP1を更新する(ステップS41)。例えば、第1ロボット装置100は、収集した画像等を基に、危険度マップMP1を更新する。第1ロボット装置100は、画像センサ141により検知した画像や第2ロボット装置200から取得した画像等を基に、危険度マップMP1を更新する。図15の例では、情報処理システム1は、危険度マップMP1を危険度マップMP2に更新する。 The information processing system 1 updates the risk map MP1 (step S41). For example, the first robot device 100 updates the risk map MP1 based on the collected images and the like. The first robot device 100 updates the risk map MP1 based on an image detected by the image sensor 141, an image acquired from the second robot device 200, and the like. In the example of FIG. 15, the information processing system 1 updates the risk map MP1 to the risk map MP2.
 ここで、情報処理システム1は、カテゴリCT1に分類されたオブジェクトについては、配置態様の変更が困難なため、危険度マップ更新の際に、配置態様の更新が無いと推定する。また、情報処理システム1は、カテゴリCT2に分類されたオブジェクトについては、配置態様の変更が容易なため、危険度マップ更新の際に、配置態様の更新が起こり得ると推定する。そして、情報処理システム1は、カテゴリCT1に分類されたオブジェクトの配置態様を基に、カテゴリCT2に分類されたオブジェクトの配置態様の更新を行うことで、危険度マップを更新する。 Here, since the arrangement mode of objects classified into the category CT1 is difficult to change, the information processing system 1 estimates that their arrangement mode is not updated when the risk map is updated. On the other hand, since the arrangement mode of objects classified into the category CT2 can easily be changed, the information processing system 1 estimates that their arrangement mode can be updated when the risk map is updated. Then, the information processing system 1 updates the risk map by updating the arrangement mode of the objects classified into the category CT2 based on the arrangement mode of the objects classified into the category CT1.
 情報処理システム1は、画像センサ141により検知した画像や第2ロボット装置200から取得した画像等を基に、各オブジェクトOB1~OB7等の配置態様を推定する。図15の例では、情報処理システム1は、オブジェクトOB2~OB5の配置態様が変更されたと推定する。例えば、情報処理システム1は、配置態様の変更が困難なカテゴリCT1に分類されたオブジェクト(オブジェクトOB1等)を基に、カテゴリCT2に属する各オブジェクトOB2~OB7等の配置態様を推定する。そして、情報処理システム1は、推定結果を基に、危険度マップMP1を危険度マップMP2に更新する。具体的には、情報処理システム1は、オブジェクトOB2、OB3が傾けられ、オブジェクトOB4、OB5の位置が移動したと推定し、危険度マップMP1を危険度マップMP2に更新する。このように、情報処理システム1は、ロボットの撮影画像からオブジェクトを分類して、危険度マップを更新する。 The information processing system 1 estimates the arrangement mode of each of the objects OB1 to OB7 and the like based on the image detected by the image sensor 141, the image acquired from the second robot device 200, and the like. In the example of FIG. 15, the information processing system 1 estimates that the arrangement mode of the objects OB2 to OB5 has been changed. For example, the information processing system 1 estimates the arrangement mode of each of the objects OB2 to OB7 and the like belonging to the category CT2 based on the objects classified into the category CT1, whose arrangement mode is difficult to change (such as the object OB1). Then, the information processing system 1 updates the risk map MP1 to the risk map MP2 based on the estimation result. Specifically, the information processing system 1 estimates that the objects OB2 and OB3 have been tilted and that the positions of the objects OB4 and OB5 have been moved, and updates the risk map MP1 to the risk map MP2. In this way, the information processing system 1 classifies the objects from the images captured by the robots and updates the risk map.
 情報処理システム1は、属性によりマップの更新率を変更してもよい。情報処理システム1は、配置態様の変更が可能な属性(カテゴリ)が「CT2(相当容易)」、「CT3(容易)」、「CT4(やや容易)」等の複数のレベルである場合、そのレベルに応じて、オブジェクトの配置態様の更新率を変更してもよい。この場合、情報処理システム1は、配置態様の変更が容易なカテゴリのオブジェクト程、配置態様の更新率を大きくしてもよい。例えば、情報処理システム1は、カテゴリCT2のオブジェクト程、配置態様の更新率を大きく、カテゴリCT4のオブジェクト程、配置態様の更新率を小さくしてもよい。なお、上記は一例であり、情報処理システム1は、種々の方法を適宜用いて、属性によりマップの更新率を変更してもよい。 The information processing system 1 may change the map update rate depending on the attribute. When the attributes (categories) whose arrangement mode can be changed have a plurality of levels such as "CT2 (very easy)", "CT3 (easy)", and "CT4 (somewhat easy)", the information processing system 1 may change the update rate of the arrangement mode of an object according to its level. In this case, the information processing system 1 may set a larger update rate of the arrangement mode for objects in a category whose arrangement mode is easier to change. For example, the information processing system 1 may use a larger update rate of the arrangement mode for objects in the category CT2 and a smaller update rate for objects in the category CT4. The above is an example, and the information processing system 1 may change the map update rate according to the attributes by appropriately using various methods.
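 As a hedged sketch of this category-dependent update, assuming each tracked object stores a category and a position, the stored position can be blended toward the newly observed position at a rate chosen per category, with rate 0 for rigid CT1 objects (the specific rate values and data layout below are illustrative assumptions, not values given in the disclosure):

```python
# Illustrative per-category update rates (0 = never moves, 1 = follow observation).
UPDATE_RATE = {"CT1": 0.0, "CT2": 0.9, "CT3": 0.6, "CT4": 0.3}

def update_risk_map(risk_map, observations):
    """Blend stored object positions with new observations.

    `risk_map` maps object id -> {"category": str, "pos": (x, y)};
    `observations` maps object id -> newly estimated (x, y).
    Rigid CT1 objects keep their mapped position; movable objects are
    shifted toward the observation at their category's update rate.
    """
    for obj_id, new_pos in observations.items():
        entry = risk_map.get(obj_id)
        if entry is None:
            continue
        rate = UPDATE_RATE.get(entry["category"], 0.5)
        x, y = entry["pos"]
        nx, ny = new_pos
        entry["pos"] = (x + rate * (nx - x), y + rate * (ny - y))
    return risk_map

if __name__ == "__main__":
    mp = {
        "OB1_sink":  {"category": "CT1", "pos": (0.0, 0.0)},
        "OB2_table": {"category": "CT2", "pos": (2.0, 1.0)},
    }
    updated = update_risk_map(mp, {"OB1_sink": (0.3, 0.1), "OB2_table": (2.5, 1.4)})
    print(updated["OB1_sink"]["pos"])   # stays at (0.0, 0.0)
    print(updated["OB2_table"]["pos"])  # moves most of the way to (2.5, 1.4)
```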
 次に、図16を用いて、危険マップの更新の処理の流れを説明する。図16は、危険度マップの更新の処理フローの一例を示す。なお、図16に示す各処理は、第1ロボット装置100や第2ロボット装置200等、情報処理システム1に含まれるいずれの装置が行ってもよい。また、図16に示す処理について、図1や図7~図15等と同様の点については適宜説明を省略する。また、図16に示すステップ番号は、処理の説明を行うためのもの(符号)であり処理の順序を示すものではない。 Next, the flow of processing for updating the danger map will be described with reference to FIG. FIG. 16 shows an example of the processing flow for updating the risk map. Each process shown in FIG. 16 may be performed by any device included in the information processing system 1, such as the first robot device 100 and the second robot device 200. Further, with respect to the processing shown in FIG. 16, the same points as those in FIGS. 1 and 7 to 15 and the like will be omitted as appropriate. Further, the step numbers shown in FIG. 16 are for explaining the processing (reference numerals) and do not indicate the order of the processing.
 情報処理システム1は、不可侵対象領域を指定する(ステップS801)。情報処理システム1は、情報処理システム1の管理者等から不可侵対象領域の指定を受け付けてもよいし、画像などを用いて自動で不可侵対象領域を指定してもよい。 The information processing system 1 specifies an inviolable target area (step S801). The information processing system 1 may accept the designation of the inviolable target area from the administrator of the information processing system 1 or the like, or may automatically specify the inviolable target area using an image or the like.
 情報処理システム1は、環境情報をマッピングする(ステップS802)。情報処理システム1は、センサ画像から環境情報を推定し、推定した環境情報をオブジェクトDBに登録する。例えば、情報処理システム1は、画像からオブジェクト等の配置態様の情報を推定し、推定したオブジェクトの情報をオブジェクトDBに登録する。 The information processing system 1 maps the environmental information (step S802). The information processing system 1 estimates the environmental information from the sensor image and registers the estimated environmental information in the object database. For example, the information processing system 1 estimates information on the arrangement mode of an object or the like from an image, and registers the estimated object information in an object DB.
 情報処理システム1は、センサ画像を取得する(ステップS803)。情報処理システム1は、第1ロボット装置100や第2ロボット装置200が検知した画像を取得する。 The information processing system 1 acquires a sensor image (step S803). The information processing system 1 acquires images detected by the first robot device 100 and the second robot device 200.
 情報処理システム1は、オブジェクトを分類する(ステップS804)。情報処理システム1は、センサ画像や、オブジェクトDBに登録された各オブジェクトの情報を用いて、オブジェクトを分類する。情報処理システム1は、オブジェクトをカテゴリCT1、CT2等に分類する。 The information processing system 1 classifies objects (step S804). The information processing system 1 classifies objects by using the sensor image and the information of each object registered in the object DB. The information processing system 1 classifies objects into categories CT1, CT2, and the like.
 情報処理システム1は、オブジェクトの位置を推定する(ステップS805)。そして、情報処理システム1は、危険度マップを更新する(ステップS806)。情報処理システム1は、推定したオブジェクトの位置を基に、危険度マップDBの危険度マップを更新する。 The information processing system 1 estimates the position of the object (step S805). Then, the information processing system 1 updates the risk level map (step S806). The information processing system 1 updates the risk map of the risk map DB based on the estimated position of the object.
 情報処理システム1は、対象の位置やポーズ(姿勢)を推定する(ステップS807)。例えば、情報処理システム1は、監視対象TGの位置や姿勢を推定する。 The information processing system 1 estimates the position and pose (posture) of the target (step S807). For example, the information processing system 1 estimates the position and orientation of the monitored TG.
 情報処理システム1は、禁止項目をチェックする(ステップS808)。情報処理システム1は、推定した監視対象TGの位置や姿勢や、危険度マップを基に、該当する禁止項目があるかをチェックする。情報処理システム1は、危険が発生する可能性があるため監視対象の行動を禁止する禁止項目があるかをチェックする。 The information processing system 1 checks prohibited items (step S808). The information processing system 1 checks whether there is an applicable prohibited item based on the estimated position and posture of the monitored target TG and the risk map. The information processing system 1 checks whether there is a prohibited item that prohibits an action of the monitored target because a danger may occur.
 情報処理システム1は、侵入をブロックする(ステップS809)。情報処理システム1は、監視対象の行動を禁止する禁止項目が該当する場合、監視対象の危険エリアへの侵入をブロックする。 The information processing system 1 blocks intrusion (step S809). When a prohibited item that prohibits an action of the monitored target applies, the information processing system 1 blocks the monitored target from entering the dangerous area.
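 A simple way to picture the prohibited-item check and the intrusion blocking above, assuming the no-entry areas are held as named rectangles in room coordinates (an illustrative assumption, not the disclosed data format), is to test the estimated target position against each area and block when any area is violated:

```python
def point_in_rect(point, rect):
    """rect is (x_min, y_min, x_max, y_max) in room coordinates."""
    x, y = point
    return rect[0] <= x <= rect[2] and rect[1] <= y <= rect[3]

def check_prohibited(target_pos, no_entry_areas):
    """Return the names of designated no-entry areas the target is inside."""
    return [name for name, rect in no_entry_areas.items()
            if point_in_rect(target_pos, rect)]

if __name__ == "__main__":
    areas = {"kitchen_stove": (4.0, 0.0, 5.0, 1.0),
             "stairs": (0.0, 3.0, 1.0, 4.0)}
    violated = check_prohibited((4.2, 0.5), areas)
    if violated:
        print("block intrusion into:", violated)  # -> ['kitchen_stove']
```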
 情報処理システム1は、ロボットの自己位置を推定する(ステップS810)。情報処理システム1は、第1ロボット装置100や第2ロボット装置200の位置を推定する。 The information processing system 1 estimates the self-positions of the robots (step S810). The information processing system 1 estimates the positions of the first robot device 100 and the second robot device 200.
 そして、情報処理システム1は、ロボットの自己位置を更新する(ステップS811)。情報処理システム1は、第1ロボット装置100や第2ロボット装置200の位置を更新する。そして、情報処理システム1は、更新した第1ロボット装置100や第2ロボット装置200の位置を基に、ステップS809の処理を行う。 Then, the information processing system 1 updates the self-positions of the robots (step S811). The information processing system 1 updates the positions of the first robot device 100 and the second robot device 200. Then, the information processing system 1 performs the process of step S809 based on the updated positions of the first robot device 100 and the second robot device 200.
 このように、情報処理システム1は、センサ画像から環境情報を推定してアップデートすることで、椅子等のオブジェクトの位置が変わるなどした場合でも危険度マップを更新可能となる。 In this way, the information processing system 1 can update the risk map even if the position of an object such as a chair changes by estimating the environmental information from the sensor image and updating it.
[2.その他の実施形態]
 上述した各実施形態に係る処理は、上記各実施形態以外にも種々の異なる形態(変形例)にて実施されてよい。
[2. Other embodiments]
The processing according to each of the above-described embodiments may be carried out in various different forms (modifications) other than each of the above-described embodiments.
[2-1.その他の構成例]
 例えば、上述した例では、1つの第1ロボット装置100と1つの第2ロボット装置200とによる監視対象の監視の例を示したが、複数の第2ロボット装置に対して1つの第1ロボット装置100が対応付けられてもよい。例えば、監視対象が複数ある場合、各監視対象に対して1つの第2ロボット装置200が追尾して監視するとともに、1つの第1ロボット装置100が各第2ロボット装置200からの情報を収集して、複数の監視対象全体を監視してもよい。このように、第2ロボット装置は、一の監視対象に対応づけられるが、第1ロボット装置100は、複数の監視対象に対応付けられてもよい。すなわち、情報処理システム1には、1つの第1ロボット装置100と、複数の第2ロボット装置200との組合せが含まれてもよい。
[2-1. Other configuration examples]
 For example, in the above-described example, monitoring of a monitored target by one first robot device 100 and one second robot device 200 has been described, but one first robot device 100 may be associated with a plurality of second robot devices. For example, when there are a plurality of monitored targets, one second robot device 200 may track and monitor each monitored target, while one first robot device 100 collects information from each second robot device 200 and monitors the plurality of monitored targets as a whole. In this way, each second robot device is associated with one monitored target, whereas the first robot device 100 may be associated with a plurality of monitored targets. That is, the information processing system 1 may include a combination of one first robot device 100 and a plurality of second robot devices 200.
[2-2.その他]
 また、上記各実施形態において説明した各処理のうち、自動的に行われるものとして説明した処理の全部または一部を手動的に行うこともでき、あるいは、手動的に行われるものとして説明した処理の全部または一部を公知の方法で自動的に行うこともできる。この他、上記文書中や図面中で示した処理手順、具体的名称、各種のデータやパラメータを含む情報については、特記する場合を除いて任意に変更することができる。例えば、各図に示した各種情報は、図示した情報に限られない。
[2-2. Others]
 Further, among the processes described in each of the above embodiments, all or part of the processes described as being performed automatically can also be performed manually, and all or part of the processes described as being performed manually can also be performed automatically by a known method. In addition, the processing procedures, specific names, and information including various data and parameters shown in the above document and drawings can be arbitrarily changed unless otherwise specified. For example, the various information shown in each figure is not limited to the illustrated information.
 また、図示した各装置の各構成要素は機能概念的なものであり、必ずしも物理的に図示の如く構成されていることを要しない。すなわち、各装置の分散・統合の具体的形態は図示のものに限られず、その全部または一部を、各種の負荷や使用状況などに応じて、任意の単位で機能的または物理的に分散・統合して構成することができる。 Further, each component of each illustrated device is a functional concept and does not necessarily have to be physically configured as illustrated. That is, the specific form of distribution and integration of each device is not limited to the illustrated one, and all or part of each device can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
 また、上述してきた各実施形態及び変形例は、処理内容を矛盾させない範囲で適宜組み合わせることが可能である。 Further, each of the above-described embodiments and modifications can be appropriately combined as long as the processing contents do not contradict each other.
 また、本明細書に記載された効果はあくまで例示であって限定されるものでは無く、他の効果があってもよい。 Further, the effects described in the present specification are merely examples and are not limited, and other effects may be obtained.
[3.本開示に係る効果]
 上述のように、本開示に係る情報処理システム(実施形態では情報処理システム1)は、移動手段(実施形態では移動部15)と、物体を操作する手段(実施形態では操作部16)とを有する第1ロボット(実施形態では第1ロボット装置100)と、移動手段(実施形態では移動部25)を有し、監視の対象となる監視対象を追尾する第2ロボット(実施形態では第2ロボット装置200)と、を備えた情報処理システムであって、第2ロボットは、画像センサ(実施形態では画像センサ241)により撮像した監視対象の画像を第1ロボットに送信し、第1ロボットは、第2ロボットから受信した監視対象の画像に基づいて、監視対象の行動に起因する危険を判定し、判定結果に基づいて、危険の発生を回避するための処理を実行する。
[3. Effect of this disclosure]
 As described above, the information processing system according to the present disclosure (the information processing system 1 in the embodiment) is an information processing system including a first robot (the first robot device 100 in the embodiment) having a moving means (the moving unit 15 in the embodiment) and a means for operating an object (the operation unit 16 in the embodiment), and a second robot (the second robot device 200 in the embodiment) that has a moving means (the moving unit 25 in the embodiment) and tracks a monitored target to be monitored. The second robot transmits an image of the monitored target captured by an image sensor (the image sensor 241 in the embodiment) to the first robot, and the first robot determines, based on the image of the monitored target received from the second robot, a danger caused by the behavior of the monitored target and executes, based on the determination result, a process for avoiding the occurrence of the danger.
 これにより、本開示に係る情報処理システムは、監視対象を追尾する第2ロボットと、物体を操作する手段を有する第1ロボットとが連携して、監視対象の行動に起因する危険の発生を回避するための処理を実行することで、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 As a result, in the information processing system according to the present disclosure, the second robot that tracks the monitored target and the first robot that has the means for operating an object cooperate to execute the process for avoiding the occurrence of a danger caused by the behavior of the monitored target, which makes it possible to appropriately monitor the monitored target without attaching sensors to structures in the space.
 また、監視対象は、子どもまたはペットである。これにより、情報処理システムは、子どもまたはペットを監視対象として、安全を適切に管理可能にすることができる。すなわち、情報処理システムは、第2ロボットと第1ロボットとが連携して、子どもまたはペットの行動に起因する危険の発生を回避するための処理を実行することで、空間の構造物にセンサを取り付けることなく、子どもまたはペットの安全を適切に管理可能にすることができる。 Further, the monitored target is a child or a pet. This allows the information processing system to appropriately manage safety with a child or a pet as the monitored target. That is, in the information processing system, the second robot and the first robot cooperate to execute the process for avoiding the occurrence of a danger caused by the behavior of the child or the pet, which makes it possible to appropriately manage the safety of the child or the pet without attaching sensors to structures in the space.
 また、監視対象は、屋内の居住環境に位置する。これにより、情報処理システムは、屋内の居住環境に位置する監視対象を適切に監視可能にすることができる。すなわち、情報処理システムは、第2ロボットと第1ロボットとが連携して、屋内の居住環境に位置する監視対象の行動に起因する危険の発生を回避するための処理を実行することで、屋内の居住環境の構造物にセンサを取り付けることなく、屋内の居住環境に位置する監視対象を適切に監視可能にすることができる。 Further, the monitored target is located in an indoor living environment. This allows the information processing system to appropriately monitor a monitored target located in an indoor living environment. That is, in the information processing system, the second robot and the first robot cooperate to execute the process for avoiding the occurrence of a danger caused by the behavior of the monitored target located in the indoor living environment, which makes it possible to appropriately monitor the monitored target located in the indoor living environment without attaching sensors to structures in the indoor living environment.
 また、第2ロボットは、大ロボットである第1ロボットよりもサイズが小さい小ロボットである。これにより、情報処理システムは、第1ロボットよりも小さい第2ロボットが監視対象を追尾することで、監視対象を追尾できない可能性を低減することができる。したがって、情報処理システムは、監視対象を見失う可能性を低減することができるため、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 Further, the second robot is a small robot that is smaller in size than the first robot, which is a large robot. By having the second robot, which is smaller than the first robot, track the monitored target, the information processing system can reduce the possibility of being unable to track the monitored target. Therefore, since the information processing system can reduce the possibility of losing sight of the monitored target, it is possible to appropriately monitor the monitored target without attaching sensors to structures in the space.
 また、第1ロボットは、監視対象の行動に起因して監視対象に及ぶ危険を判定する。これにより、情報処理システムは、監視対象の行動に起因して監視対象に及ぶ危険を判定することで、監視対象に及ぶ危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。このように、情報処理システムは、監視対象の安全を適切に管理可能にすることができる。 In addition, the first robot determines the danger to the monitored target due to the behavior of the monitored target. As a result, the information processing system can appropriately avoid the danger to the monitored target by determining the danger to the monitored target due to the behavior of the monitored target. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space. In this way, the information processing system can appropriately manage the safety of the monitored object.
 また、第1ロボットは、監視対象の行動に起因して監視対象以外に及ぶ危険を判定する。これにより、情報処理システムは、監視対象の行動に起因して監視対象以外に及ぶ危険を判定することで、監視対象以外に及ぶ危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。このように、情報処理システムは、監視対象以外の物体の安全を適切に管理可能にすることができる。 In addition, the first robot determines the danger that extends to other than the monitored target due to the behavior of the monitored target. As a result, the information processing system can appropriately avoid the danger that extends to the non-monitored target by determining the danger that extends to the non-monitored target due to the behavior of the monitored target. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space. In this way, the information processing system can appropriately manage the safety of objects other than those to be monitored.
 また、第1ロボットは、監視対象の物体に対する行動に起因する危険を判定する。これにより、情報処理システムは、監視対象の物体に対する行動に起因する危険を判定することで、監視対象の物体に対する行動に起因する危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 In addition, the first robot determines the danger caused by the action on the object to be monitored. As a result, the information processing system can appropriately avoid the danger caused by the behavior of the monitored object by determining the danger caused by the behavior of the monitored object. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
 また、第1ロボットは、監視対象の物体への接触に起因する危険を判定する。これにより、情報処理システムは、監視対象の物体への接触に起因する危険を判定することで、監視対象の物体への接触に起因する危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 In addition, the first robot determines the danger caused by contact with the object to be monitored. As a result, the information processing system can appropriately avoid the danger caused by the contact with the monitored object by determining the danger caused by the contact with the monitored object. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
 また、第1ロボットは、監視対象の物体の把持に起因する危険を判定する。これにより、情報処理システムは、監視対象の物体の把持に起因する危険を判定することで、監視対象の物体の把持に起因する危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 In addition, the first robot determines the danger caused by grasping the object to be monitored. As a result, the information processing system can appropriately avoid the danger caused by the gripping of the monitored object by determining the danger caused by the gripping of the monitored object. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
 また、第1ロボットは、監視対象の位置の移動に起因する危険を判定する。これにより、情報処理システムは、監視対象の位置の移動に起因する危険を判定することで、監視対象の位置の移動に起因する危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 In addition, the first robot determines the danger caused by the movement of the position to be monitored. As a result, the information processing system can appropriately avoid the danger caused by the movement of the position of the monitoring target by determining the danger caused by the movement of the position of the monitoring target. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
 また、第1ロボットは、危険の発生が予測されるエリアへの監視対象の侵入に起因する危険を判定する。これにより、情報処理システムは、危険の発生が予測されるエリアへの監視対象の侵入に起因する危険を判定することで、危険の発生が予測されるエリアへの監視対象の侵入に起因する危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 Further, the first robot determines the danger caused by the intrusion of the monitored target into an area where the occurrence of the danger is predicted. By determining the danger caused by the intrusion of the monitored target into the area where the occurrence of the danger is predicted, the information processing system can appropriately avoid that danger. Therefore, the information processing system can appropriately monitor the monitored target without attaching sensors to structures in the space.
 また、第1ロボットは、監視対象の行動に起因する危険の発生が予測されると判定した場合、操作手段による物体の操作を実行する。これにより、情報処理システムは、監視対象の行動に起因する危険を判定し、監視対象の行動に起因する危険の発生が予測されると判定した場合、操作手段による物体の操作を実行することで、操作手段の操作により危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 Further, when the first robot determines that the occurrence of a danger caused by the behavior of the monitored target is predicted, the first robot operates an object with the operating means. In this way, the information processing system determines the danger caused by the behavior of the monitored target and, when it determines that the occurrence of the danger is predicted, operates the object with the operating means, so that the danger can be appropriately avoided by the operation of the operating means. Therefore, the information processing system can appropriately monitor the monitored target without attaching sensors to structures in the space.
 また、第1ロボットは、操作手段による監視対象に対する操作を実行する。これにより、情報処理システムは、操作手段による監視対象に対する操作を実行することで、操作手段の操作により危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 In addition, the first robot executes an operation on the monitored object by the operating means. As a result, the information processing system can appropriately avoid danger by operating the operating means by executing the operation on the monitored object by the operating means. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
 また、第1ロボットは、操作手段により監視対象を移動させる操作を実行する。これにより、情報処理システムは、操作手段により監視対象を移動させる操作を実行することで、操作手段の操作により危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 In addition, the first robot executes an operation of moving the monitoring target by the operating means. As a result, the information processing system can appropriately avoid danger by operating the operating means by executing the operation of moving the monitoring target by the operating means. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
 また、第1ロボットは、操作手段により監視対象を、危険の発生が予測されるエリアから退避させる操作を実行する。これにより、情報処理システムは、操作手段により監視対象を、危険の発生が予測されるエリアから退避させる操作を実行することで、操作手段の操作により危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。このように、情報処理システムは、危険の発生が予測されるエリアから監視対象を退避させることで、監視対象の安全を適切に管理可能にすることができる。 In addition, the first robot executes an operation of evacuating the monitored object from the area where the occurrence of danger is predicted by the operating means. As a result, the information processing system can appropriately avoid the danger by operating the operating means by executing the operation of evacuating the monitored object from the area where the occurrence of the danger is predicted by the operating means. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space. In this way, the information processing system can appropriately manage the safety of the monitored target by evacuating the monitored target from the area where the occurrence of danger is predicted.
 また、第1ロボットは、操作手段により監視対象の行動を抑制する操作を実行する。これにより、情報処理システムは、操作手段により監視対象の行動を抑制する操作を実行することで、操作手段の操作により危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。このように、情報処理システムは、例えば、監視対象が危険な物体に近づいたり、触ったりしようとしている場合などにおいて、監視対象の行動を抑制することで、監視対象の安全を適切に管理可能にすることができる。 Further, the first robot performs an operation of restraining the behavior of the monitored target with the operating means. By performing the operation of restraining the behavior of the monitored target with the operating means, the information processing system can appropriately avoid the danger by the operation of the operating means. Therefore, the information processing system can appropriately monitor the monitored target without attaching sensors to structures in the space. In this way, the information processing system can appropriately manage the safety of the monitored target by restraining its behavior, for example, when the monitored target is about to approach or touch a dangerous object.
 また、第1ロボットは、操作手段により監視対象の腕を把持する操作を実行する。これにより、情報処理システムは、操作手段により監視対象の腕を把持する操作を実行することで、操作手段の操作により危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。このように、情報処理システムは、例えば、監視対象が物体を誤飲する恐れがある場合などにおいて、監視対象の腕を把持し、物体を飲み込む行動を抑制することで、監視対象の安全を適切に管理可能にすることができる。 Further, the first robot performs an operation of grasping the arm of the monitored target with the operating means. By performing the operation of grasping the arm of the monitored target with the operating means, the information processing system can appropriately avoid the danger by the operation of the operating means. Therefore, the information processing system can appropriately monitor the monitored target without attaching sensors to structures in the space. In this way, the information processing system can appropriately manage the safety of the monitored target by grasping the arm of the monitored target and restraining the action of swallowing an object, for example, when the monitored target may accidentally swallow the object.
 また、第1ロボットは、監視対象の行動に起因する危険の発生が予測されると判定した場合、第2ロボットに危険の発生を回避するための行動を指示する。これにより、情報処理システムは、第2ロボットに危険の発生を回避するための行動を行わせることより危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 Further, when the first robot determines that the occurrence of danger due to the behavior of the monitored object is predicted, the first robot instructs the second robot to take an action to avoid the occurrence of danger. As a result, the information processing system can appropriately avoid the danger by causing the second robot to take an action for avoiding the occurrence of the danger. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
 また、第1ロボットは、第2ロボットに、監視対象の注意を向けさせるための行動を指示する。これにより、情報処理システムは、第2ロボットに監視対象の注意を向けさせ、第2ロボットに危険の発生を回避するための行動を行わせることより危険を適切に回避することができる。したがって、情報処理システムは、空間の構造物にセンサを取り付けることなく、監視対象を適切に監視可能にすることができる。 In addition, the first robot instructs the second robot to take an action to draw the attention of the monitored object. As a result, the information processing system can appropriately avoid the danger by making the second robot pay attention to the monitored object and causing the second robot to take an action for avoiding the occurrence of the danger. Therefore, the information processing system can appropriately monitor the monitoring target without attaching the sensor to the structure in the space.
[4.ハードウェア構成]
 上述してきた各実施形態に係る第1ロボット装置100や第2ロボット装置200等の情報機器は、例えば図17に示すような構成のコンピュータ1000によって実現される。図17は、第1ロボット装置や第2ロボット装置等の情報処理装置の機能を実現するコンピュータ1000の一例を示すハードウェア構成図である。以下、実施形態に係る第1ロボット装置100を例に挙げて説明する。コンピュータ1000は、CPU1100、RAM1200、ROM(Read Only Memory)1300、HDD(Hard Disk Drive)1400、通信インターフェイス1500、及び入出力インターフェイス1600を有する。コンピュータ1000の各部は、バス1050によって接続される。
[4. Hardware configuration]
Information devices such as the first robot device 100 and the second robot device 200 according to each of the above-described embodiments are realized by, for example, a computer 1000 having a configuration as shown in FIG. FIG. 17 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of an information processing device such as a first robot device and a second robot device. Hereinafter, the first robot device 100 according to the embodiment will be described as an example. The computer 1000 includes a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input / output interface 1600. Each part of the computer 1000 is connected by a bus 1050.
 CPU1100は、ROM1300又はHDD1400に格納されたプログラムに基づいて動作し、各部の制御を行う。例えば、CPU1100は、ROM1300又はHDD1400に格納されたプログラムをRAM1200に展開し、各種プログラムに対応した処理を実行する。 The CPU 1100 operates based on the program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands the program stored in the ROM 1300 or the HDD 1400 into the RAM 1200 and executes processing corresponding to various programs.
 ROM1300は、コンピュータ1000の起動時にCPU1100によって実行されるBIOS(Basic Input Output System)等のブートプログラムや、コンピュータ1000のハードウェアに依存するプログラム等を格納する。 The ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, a program that depends on the hardware of the computer 1000, and the like.
 HDD1400は、CPU1100によって実行されるプログラム、及び、かかるプログラムによって使用されるデータ等を非一時的に記録する、コンピュータが読み取り可能な記録媒体である。具体的には、HDD1400は、プログラムデータ1450の一例である本開示に係る情報処理プログラムを記録する記録媒体である。 The HDD 1400 is a computer-readable recording medium that non-temporarily records a program executed by the CPU 1100 and data used by the program. Specifically, the HDD 1400 is a recording medium for recording an information processing program according to the present disclosure, which is an example of program data 1450.
 通信インターフェイス1500は、コンピュータ1000が外部ネットワーク1550(例えばインターネット)と接続するためのインターフェイスである。例えば、CPU1100は、通信インターフェイス1500を介して、他の機器からデータを受信したり、CPU1100が生成したデータを他の機器へ送信したりする。 The communication interface 1500 is an interface for the computer 1000 to connect to an external network 1550 (for example, the Internet). For example, the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
 入出力インターフェイス1600は、入出力デバイス1650とコンピュータ1000とを接続するためのインターフェイスである。例えば、CPU1100は、入出力インターフェイス1600を介して、キーボードやマウス等の入力デバイスからデータを受信する。また、CPU1100は、入出力インターフェイス1600を介して、ディスプレイやスピーカーやプリンタ等の出力デバイスにデータを送信する。また、入出力インターフェイス1600は、所定の記録媒体(メディア)に記録されたプログラム等を読み取るメディアインターフェイスとして機能してもよい。メディアとは、例えばDVD(Digital Versatile Disc)、PD(Phase change rewritable Disk)等の光学記録媒体、MO(Magneto-Optical disk)等の光磁気記録媒体、テープ媒体、磁気記録媒体、または半導体メモリ等である。例えば、コンピュータ1000が実施形態に係る第1ロボット装置100として機能する場合、コンピュータ1000のCPU1100は、RAM1200上にロードされた情報処理プログラムを実行することにより、制御部13等の機能を実現する。また、HDD1400には、本開示に係る情報処理プログラムや、記憶部12内のデータが格納される。なお、CPU1100は、プログラムデータ1450をHDD1400から読み取って実行するが、他の例として、外部ネットワーク1550を介して、他の装置からこれらのプログラムを取得してもよい。 The input / output interface 1600 is an interface for connecting the input / output device 1650 and the computer 1000. For example, the CPU 1100 receives data from an input device such as a keyboard or mouse via the input / output interface 1600. Further, the CPU 1100 transmits data to an output device such as a display, a speaker, or a printer via the input / output interface 1600. Further, the input / output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium (media). The media is, for example, an optical recording medium such as a DVD (Digital Versatile Disc) or a PD (Phase change rewritable Disk), a magneto-optical recording medium such as an MO (Magneto-Optical disk), a tape medium, a magnetic recording medium, or a semiconductor memory. For example, when the computer 1000 functions as the first robot device 100 according to the embodiment, the CPU 1100 of the computer 1000 realizes the functions of the control unit 13 and the like by executing the information processing program loaded on the RAM 1200. Further, the HDD 1400 stores the information processing program according to the present disclosure and the data in the storage unit 12. The CPU 1100 reads the program data 1450 from the HDD 1400 and executes it, but as another example, these programs may be acquired from another device via the external network 1550.
The present technology can also have the following configurations.
(1)
An information processing system comprising:
a first robot having moving means and operating means for manipulating an object; and
a second robot having moving means and tracking a monitoring target to be monitored, wherein
the second robot transmits an image of the monitoring target captured by an image sensor to the first robot, and
the first robot determines, based on the image of the monitoring target received from the second robot, a danger caused by a behavior of the monitoring target, and executes, based on a determination result, processing for avoiding the occurrence of the danger.
(2)
The information processing system according to (1), wherein the monitoring target is a child or a pet.
(3)
The information processing system according to (1) or (2), wherein the monitoring target is located in an indoor living environment.
(4)
The information processing system according to any one of (1) to (3), wherein the second robot is a small robot that is smaller in size than the first robot, which is a large robot.
(5)
The information processing system according to any one of (1) to (4), wherein the first robot determines the danger to the monitoring target caused by the behavior of the monitoring target.
(6)
The information processing system according to any one of (1) to (4), wherein the first robot determines the danger, caused by the behavior of the monitoring target, to something other than the monitoring target.
(7)
The information processing system according to any one of (1) to (6), wherein the first robot determines the danger caused by an action of the monitoring target on an object.
(8)
The information processing system according to (7), wherein the first robot determines the danger caused by contact of the monitoring target with the object.
(9)
The information processing system according to (7) or (8), wherein the first robot determines the danger caused by grasping of the object by the monitoring target.
(10)
The information processing system according to any one of (1) to (9), wherein the first robot determines the danger caused by a movement of the position of the monitoring target.
(11)
The information processing system according to (10), wherein the first robot determines the danger caused by entry of the monitoring target into an area where the occurrence of the danger is predicted.
(12)
The information processing system according to any one of (1) to (11), wherein the first robot executes an operation on an object by the operating means when determining that the occurrence of the danger caused by the behavior of the monitoring target is predicted.
(13)
The information processing system according to (12), wherein the first robot executes an operation on the monitoring target by the operating means.
(14)
The information processing system according to (13), wherein the first robot executes an operation of moving the monitoring target by the operating means.
(15)
The information processing system according to (14), wherein the first robot executes an operation of evacuating, by the operating means, the monitoring target from an area where the occurrence of the danger is predicted.
(16)
The information processing system according to (13), wherein the first robot executes an operation of suppressing the behavior of the monitoring target by the operating means.
(17)
The information processing system according to (16), wherein the first robot executes an operation of grasping an arm of the monitoring target by the operating means.
(18)
The information processing system according to any one of (1) to (17), wherein the first robot, when determining that the occurrence of the danger caused by the behavior of the monitoring target is predicted, instructs the second robot to take an action for avoiding the occurrence of the danger.
(19)
The information processing system according to (18), wherein the first robot instructs the second robot to take an action for drawing the attention of the monitoring target.
(20)
The information processing system according to (19), wherein the first robot instructs the second robot to output a sound.
(21)
The information processing system according to (19) or (20), wherein the first robot instructs the second robot to position itself within the field of view of the monitoring target.
(22)
The information processing system according to any one of (1) to (21), wherein the first robot estimates its own position.
(23)
The information processing system according to any one of (1) to (22), wherein the first robot recognizes objects.
(24)
The information processing system according to any one of (1) to (23), wherein the first robot generates a risk map related to the danger.
(25)
The information processing system according to any one of (1) to (24), wherein the first robot maps the position of the second robot.
(26)
The information processing system according to any one of (1) to (25), wherein the second robot recognizes the face of the monitoring target by a face recognition function.
(27)
The information processing system according to any one of (1) to (26), wherein the second robot recognizes the monitoring target as a person by a person recognition function.
(28)
The information processing system according to any one of (1) to (27), wherein the second robot recognizes objects.
(29)
The information processing system according to any one of (1) to (28), comprising a plurality of second robots each tracking one of a plurality of monitoring targets, wherein the first robot determines the danger caused by the behavior of each monitoring target based on the images of the monitoring targets received from each of the plurality of second robots, and executes, based on the determination results, processing for avoiding the occurrence of the danger.
(30)
An information processing method executed by a first robot having moving means and operating means for manipulating an object and a second robot having moving means and tracking a monitoring target to be monitored, wherein
the second robot transmits an image of the monitoring target captured by an image sensor to the first robot, and
the first robot determines, based on the image of the monitoring target received from the second robot, a danger caused by a behavior of the monitoring target, and executes, based on a determination result, processing for avoiding the occurrence of the danger.
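To make the data flow described in configurations (1) and (30) concrete, the following is a minimal sketch of the interaction between the two robots. The class and method names (SecondRobot, FirstRobot, determine_danger, execute_avoidance, and so on) are assumptions made for illustration and do not come from the publication.
# Hypothetical sketch of the interaction in configurations (1)/(30): the second robot
# forwards images of the monitoring target, and the first robot judges the danger and
# runs an avoidance process. All identifiers below are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TargetImage:
    """An image of the monitoring target captured by the second robot's image sensor."""
    pixels: bytes
    timestamp: float


class FirstRobot:
    """Large robot with moving means and operating means (manipulator)."""

    def handle_image(self, image: TargetImage) -> None:
        # Determine a danger caused by the monitoring target's behavior,
        # then act on the determination result.
        danger = self.determine_danger(image)
        if danger is not None:
            self.execute_avoidance(danger)

    def determine_danger(self, image: TargetImage) -> Optional[str]:
        # Placeholder: a real system would use recognition results and a risk map here.
        return None

    def execute_avoidance(self, danger: str) -> None:
        # Placeholder: e.g. move an object away, restrain the target's arm,
        # or instruct the second robot to draw the target's attention.
        pass


class SecondRobot:
    """Small tracking robot that follows the monitoring target."""

    def __init__(self, first_robot: FirstRobot) -> None:
        self.first_robot = first_robot

    def on_frame(self, image: TargetImage) -> None:
        # Transmit the captured image to the first robot (configuration (1)).
        self.first_robot.handle_image(image)
Here the transmission is modeled as a direct method call for brevity; in the publication the robots exchange data through their respective communication units.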
1 Information processing system
100 First robot device
11 Communication unit
12 Storage unit
121 Map information storage unit
122 Object information storage unit
123 Danger determination information storage unit
13 Control unit
131 Acquisition unit
132 Recognition unit
133 Generation unit
134 Estimation unit
135 Determination unit
136 Planning unit
137 Execution unit
14 Sensor unit
141 Image sensor
15 Moving unit
16 Operation unit
200 Second robot device
21 Communication unit
22 Storage unit
23 Control unit
231 Acquisition unit
232 Recognition unit
233 Estimation unit
234 Determination unit
235 Transmission unit
236 Execution unit
24 Sensor unit
241 Image sensor
25 Moving unit
27 Output unit

Claims (20)

1.  An information processing system comprising:
    a first robot having moving means and operating means for manipulating an object; and
    a second robot having moving means and tracking a monitoring target to be monitored, wherein
    the second robot transmits an image of the monitoring target captured by an image sensor to the first robot, and
    the first robot determines, based on the image of the monitoring target received from the second robot, a danger caused by a behavior of the monitoring target, and executes, based on a determination result, processing for avoiding the occurrence of the danger.
2.  The information processing system according to claim 1, wherein the monitoring target is a child or a pet.
3.  The information processing system according to claim 1, wherein the monitoring target is located in an indoor living environment.
4.  The information processing system according to claim 1, wherein the second robot is a small robot that is smaller in size than the first robot, which is a large robot.
5.  The information processing system according to claim 1, wherein the first robot determines the danger to the monitoring target caused by the behavior of the monitoring target.
6.  The information processing system according to claim 1, wherein the first robot determines the danger, caused by the behavior of the monitoring target, to something other than the monitoring target.
7.  The information processing system according to claim 1, wherein the first robot determines the danger caused by an action of the monitoring target on an object.
8.  The information processing system according to claim 7, wherein the first robot determines the danger caused by contact of the monitoring target with the object.
9.  The information processing system according to claim 7, wherein the first robot determines the danger caused by grasping of the object by the monitoring target.
10.  The information processing system according to claim 1, wherein the first robot determines the danger caused by a movement of the position of the monitoring target.
11.  The information processing system according to claim 10, wherein the first robot determines the danger caused by entry of the monitoring target into an area where the occurrence of the danger is predicted.
12.  The information processing system according to claim 1, wherein the first robot executes an operation on an object by the operating means when determining that the occurrence of the danger caused by the behavior of the monitoring target is predicted.
13.  The information processing system according to claim 12, wherein the first robot executes an operation on the monitoring target by the operating means.
14.  The information processing system according to claim 13, wherein the first robot executes an operation of moving the monitoring target by the operating means.
15.  The information processing system according to claim 14, wherein the first robot executes an operation of evacuating, by the operating means, the monitoring target from an area where the occurrence of the danger is predicted.
16.  The information processing system according to claim 13, wherein the first robot executes an operation of suppressing the behavior of the monitoring target by the operating means.
17.  The information processing system according to claim 16, wherein the first robot executes an operation of grasping an arm of the monitoring target by the operating means.
18.  The information processing system according to claim 1, wherein the first robot, when determining that the occurrence of the danger caused by the behavior of the monitoring target is predicted, instructs the second robot to take an action for avoiding the occurrence of the danger.
19.  The information processing system according to claim 18, wherein the first robot instructs the second robot to take an action for drawing the attention of the monitoring target.
20.  An information processing method executed by a first robot having moving means and operating means for manipulating an object and a second robot having moving means and tracking a monitoring target to be monitored, wherein
    the second robot transmits an image of the monitoring target captured by an image sensor to the first robot, and
    the first robot determines, based on the image of the monitoring target received from the second robot, a danger caused by a behavior of the monitoring target, and executes, based on a determination result, processing for avoiding the occurrence of the danger.
PCT/JP2020/032533 2019-09-06 2020-08-28 Information processing system and information processing method WO2021044953A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-162565 2019-09-06
JP2019162565 2019-09-06

Publications (1)

Publication Number Publication Date
WO2021044953A1 true WO2021044953A1 (en) 2021-03-11

Family

ID=74852542

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/032533 WO2021044953A1 (en) 2019-09-06 2020-08-28 Information processing system and information processing method

Country Status (1)

Country Link
WO (1) WO2021044953A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014176963A (en) * 2013-03-14 2014-09-25 Toyota Motor Engineering & Manufacturing North America Inc Computer-based method and system for providing active and automatic personal assistance using robotic device/platform
CN204700889U (en) * 2015-06-19 2015-10-14 磁石网络科技(长沙)有限公司 Intelligent robot
US20180317725A1 (en) * 2015-10-27 2018-11-08 Samsung Electronics Co., Ltd Cleaning robot and method for controlling same
CN106003047A (en) * 2016-06-28 2016-10-12 北京光年无限科技有限公司 Danger early warning method and device for intelligent robot
CN108710819A (en) * 2018-03-28 2018-10-26 上海乐愚智能科技有限公司 A kind of method, apparatus to eliminate safe hidden trouble, storage medium and robot
CN109309813A (en) * 2018-10-22 2019-02-05 北方工业大学 Intelligent following method suitable for indoor environment and intelligent following robot

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113925396A (en) * 2021-10-29 2022-01-14 青岛海尔智能技术研发有限公司 Method and device for cleaning floor, storage medium
CN113925396B (en) * 2021-10-29 2024-05-31 青岛海尔科技有限公司 Method and device for floor cleaning and storage medium

Similar Documents

Publication Publication Date Title
KR102235003B1 (en) Collision detection, estimation and avoidance
KR102348041B1 (en) Control method of robot system including a plurality of moving robots
Liu et al. Multirobot cooperative learning for semiautonomous control in urban search and rescue applications
KR102662949B1 (en) Artificial intelligence Moving robot and control method thereof
US20210346557A1 (en) Robotic social interaction
CN112367887B (en) Multiple robot cleaner and control method thereof
JP5324286B2 (en) Network robot system, robot control apparatus, robot control method, and robot control program
CN113226667A (en) Cleaning robot and method for performing task thereof
US20190004520A1 (en) Autonomous movement device, autonomous movement method and program recording medium
WO2019116643A1 (en) Information processing device and information processing method
KR20180023301A (en) Moving robot and control method thereof
JP2021064067A (en) Apparatus, information processing method, program, information processing system, and method of information processing system
JP2022118200A (en) Mobile device, method for controlling mobile device and program
KR20190007632A (en) Carrying drone that recognizes object location by constructing three-dimensional map
WO2021044953A1 (en) Information processing system and information processing method
JP5552710B2 (en) Robot movement control system, robot movement control program, and robot movement control method
US11986959B2 (en) Information processing device, action decision method and program
JP2020040839A (en) Remote monitoring system for elevator
JP2020004182A (en) Robot, robot control program and robot control method
JP2010205015A (en) Group behavior estimation device and service provision system
JP7351757B2 (en) How to control a moving robot
JP6667144B2 (en) Elevator remote monitoring system
WO2021177043A1 (en) Information processing device, information processing method, and program
US20240160212A1 (en) Object enrollment in a robotic cart coordination system
JP5567725B2 (en) Group behavior estimation device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20859948

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20859948

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP