WO2024080210A1 - Article moving device and control method for same - Google Patents

Article moving device and control method for same

Info

Publication number
WO2024080210A1
Authority
WO
WIPO (PCT)
Prior art keywords
moving device
item
unit
information
article
Prior art date
Application number
PCT/JP2023/036275
Other languages
French (fr)
Japanese (ja)
Inventor
パーベル サフキン
シゲマツ ユキトシ ミナミ
Original Assignee
Telexistence株式会社
Priority date
Filing date
Publication date
Application filed by Telexistence株式会社
Publication of WO2024080210A1 publication Critical patent/WO2024080210A1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices

Definitions

  • the present invention relates to an object moving device and a control method thereof.
  • an item moving device equipped with a robot arm is used to transfer items such as packaged goods loaded on a pallet onto a cart or to move them sequentially onto a conveyor belt.
  • In order for the item moving device to perform appropriate operations on items packed in packaging boxes, pallets, carts, conveyor belts, etc. (hereinafter collectively referred to as "objects"), it is preferable for the item moving device to recognize the attributes, position, orientation, etc. of these objects.
  • Patent Document 1 proposes a method for assigning attribute information to an object, which includes generating a virtual world that reproduces a real-world environment, obtaining a model in the virtual world that corresponds to the object to which attribute information is to be assigned, and assigning attribute information to the model, the attribute information including information about at least one part of the model.
  • Patent Document 2 also proposes a method for identifying the position and orientation of an object, which includes generating a virtual world that includes a display of the object in the real world, displaying a model that corresponds to the object in the virtual world, overlaying the model on the object in the virtual world, and comparing the object and the model to identify the position and orientation of the object.
  • However, neither Patent Document 1 nor Patent Document 2 provides a means for assigning attribute information and information regarding the position and orientation of an object all at once through a series of operations.
  • While the techniques disclosed in Patent Documents 1 and 2 make it possible to obtain information about the attributes, positions, and orientations of objects such as items, pallets, carts, and conveyor belts that exist around an item moving device in a logistics warehouse or the like, they do not provide any means for determining whether those objects are positioned so that the item moving device can perform an item moving operation (for example, the operation of transferring items from a pallet to a cart).
  • In work sites such as logistics warehouses, the positions of the item moving device and the objects placed around it are not fixed and may be changed as appropriate depending on the moving work.
  • When the positions of the item moving device and the various objects around it are changed in this way, the item moving device cannot perform the item moving work unless those objects end up in positions where the device can perform that work.
  • the object of one aspect of the present disclosure is to provide a means for easily assigning annotation information regarding the attributes, position, orientation, and dimensions of objects placed around an item moving device to the objects.
  • the object of another aspect of the present disclosure is to provide a means for determining whether an object is located at a position where an item moving operation can be performed by the item moving device.
  • According to one aspect of the present disclosure, an object moving device for moving an object is provided.
  • The object moving device includes an arm unit having a holding unit for holding an object, an acquisition unit for acquiring ambient environment information of the object moving device, and a control unit for controlling the operation of the holding unit, the arm unit, and the acquisition unit.
  • The control unit is configured to: cause the acquisition unit to acquire ambient environment information of the object moving device in the real world; generate a virtual space including objects present around the object moving device in the real world based on the ambient environment information; assign annotation information including attribute information of the object and information regarding its position, orientation, and dimensions to the object in the virtual world; accept a first input for specifying an object to be moved and a second input for specifying a position or area to which the object is to be moved, for the objects present in the virtual world; and cause the holding unit and the arm unit to perform an operation of moving the specified object to the specified position or area in the real world based on the first and second inputs.
  • The control unit is further configured to determine whether the objects specified in the first and second inputs are within an accessible range of the item moving device, and causes the holding unit and the arm unit to perform the operation of moving the specified item to the specified position or area only when the specified objects are within the accessible range.
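The sequence of operations recited above can be pictured as a short control loop. The sketch below only illustrates that ordering; `device`, `acquire_environment`, `build_virtual_space`, `annotate`, `within_reach`, and `execute_move` are hypothetical names, not interfaces defined in this disclosure.

```python
def move_item(device, first_input, second_input):
    """first_input selects the item to move; second_input selects the destination object."""
    env = device.acquire_environment()            # acquire surrounding environment info
    objects = device.build_virtual_space(env)     # generate the virtual space of nearby objects
    for obj in objects:
        obj.annotation = device.annotate(obj)     # assign annotation info (attribute, pose, size)
        obj.reachable = device.within_reach(obj)  # accessible-range determination
    item, dest = objects[first_input], objects[second_input]
    if item.reachable and dest.reachable:
        device.execute_move(item, dest)           # pick-and-place in the real world
    else:
        print("A specified object is outside the accessible range; reposition it and retry.")
```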
  • FIG. 1 is a schematic plan view showing an article moving device and objects arranged around it in a warehouse.
  • FIG. 2 is a schematic front view showing an article moving device and objects in a warehouse.
  • FIG. 3 is a side view showing a schematic configuration of an article moving device according to an embodiment of the present invention.
  • FIG. 4 is a block diagram showing a configuration of an article moving device according to an embodiment of the present invention.
  • FIG. 5 is a diagram showing an example of a UI screen for selecting an object type displayed on a display unit.
  • FIG. 6 is a diagram showing how the position, orientation, and dimensions of a model are changed in the virtual world displayed on the display unit in response to an operation input from a user via an operation unit.
  • FIG. 7 is a flowchart showing an article moving operation by the article moving device.
  • FIG. 8 is a diagram illustrating a first operation and a second operation in step S11 shown in FIG. 7.
  • FIG. 9 is a diagram showing an example of a state in which a position or area to which an item is to be moved has been specified.
  • FIG. 10 is a diagram showing a modified example of the article moving device in the present embodiment.
  • FIG. 11 is a diagram illustrating another modified example of the article moving device in the present embodiment.
  • Fig. 1 is a schematic plan view showing an item moving device in a warehouse and objects arranged around it (items such as products packed in packing boxes, pallets, carts, conveyor belts, etc.).
  • Fig. 2 is a schematic front view showing the item moving device and objects in a warehouse.
  • As shown in Figs. 1 and 2, the warehouse contains an item moving device 100 equipped with a robot arm 110, a pallet 20 on which items 10, such as goods packed in packing boxes, are placed, a cart 30 for a worker to transport the items 10, and a belt conveyor 40 for transporting the items 10 placed on a rotating belt.
  • the pallet 20, cart 30, and belt conveyor 40 are positioned within a distance range where the item moving work can be performed by the robot arm 110 of the item moving device 100.
  • the article moving device 100 is installed in a fixed state on the loading platform 50.
  • the loading platform 50 has two side sections 52 and one top section 54, and a space is formed between the top section 54 and the floor surface or the like into which the automated guided vehicle (AGV) 60 (described later) can enter.
  • the pallet 20 has two side sections 22 and one top section 24, and a space is formed between the top section 24 and the floor surface or the like, allowing the automatic guided vehicle (AGV) 60, which will be described later, to enter.
  • the cart 30 comprises a loading section 32 on which the items 10 are loaded, a number of casters 34 installed on the underside of the loading section 32, and a protective fence 36 provided on the loading section 32.
  • the cart 30 is provided with casters 34, allowing an operator to push the cart 30 around the warehouse.
  • the protective fence 36 is provided on three sides of the loading section 32 and prevents the items 10 loaded on the loading section 32 from collapsing and falling off the cart 30, etc.
  • the platform 50 carrying the item moving device 100 and the pallet 20 can be moved within the warehouse by an automatic transport device 60 (hereinafter referred to as an "AGV (Automatic Guided Vehicle)").
  • the AGV 60 can move autonomously by sensing the surrounding environment, or can move in response to remote control by a user.
  • the AGV 60 is equipped with a hydraulic system that moves its upper surface portion 62 in the vertical direction.
  • With the upper surface portion 62 lowered, the AGV 60 can enter the space beneath the pallet 20 or the platform 50.
  • The hydraulic system is then activated to raise the upper surface 62, lifting the pallet 20 or platform 50 to a height clear of the floor, and the AGV 60 transports the pallet 20 or platform 50 to another position in the warehouse.
  • After moving to the transport position, the AGV 60 lowers the upper surface 62, bringing the pallet 20 or platform 50 into contact with the floor, completing the transport, and exits from underneath.
  • the AGV 60 can also tow the cart 30 to move it around the warehouse.
  • Fig. 3 is a side view that shows a schematic configuration of the article moving device 100 according to this embodiment
  • Fig. 4 is a block diagram that shows the configuration of the article moving device 100 according to this embodiment.
  • the item moving device 100 includes an arm section 110, a base section 120 that supports and fixes the arm section 110, a holding section 130 provided at the tip of the arm section 110, a camera 140 provided on the holding section 130, and a control device 150 that controls the operation of the arm section 110 and the holding section 130, as well as the overall control processing of the item moving device 100, including each process described below.
  • the item moving device 100 can function as a cargo handling robot that holds an item 10 and moves it to another position (hereinafter also referred to as a "pick-and-place (PnP) operation").
  • the item moving device 100 transfers the item 10, such as packaged goods loaded on a pallet 20 in a logistics warehouse, onto a cart 30, or moves it onto a belt conveyor 40.
  • the arm unit 110 has a plurality of link members 112, 113, and 116.
  • the plurality of link members 112, 113, and 116 constitute a multi-joint robot arm.
  • the multi-joint robot arm may be a six-axis arm having degrees of freedom in linear directions along the X-axis, Y-axis, and Z-axis, and degrees of freedom in directions around the X-axis, Y-axis, and Z-axis.
  • the multi-joint robot arm may also have any other mechanism, such as a Cartesian coordinate system robot arm, a polar coordinate system robot arm, a cylindrical coordinate system robot arm, or a SCARA type robot arm.
  • the base end of the arm unit 110 is placed on the mounting table 50 described above in a fixed state.
  • the arm unit 110 can move the holding unit 130 within the distance range that the arm unit 110 can reach by moving the respective link members 112, 113, and 116.
  • the holding unit 130 is, for example, a suction gripper, and is provided with a number of suction cups 132 for holding an object, as shown in FIG. 3.
  • the holding unit 130 is provided with suction means (not shown), such as a vacuum pump, and is able to hold the surface of an object with the multiple suction cups 132 by sucking in air from suction ports opening on the inside of each suction cup 132.
  • the camera 140 is installed on the side surface of the holding unit 130 adjacent to the surface on which the suction cup 132 is provided.
  • the camera 140 is used to acquire environmental information about the surroundings of the item moving device 100 in the area below the tip of the arm unit 110.
  • the camera 140 may have, for example, an image sensor that generates an image (an RGB image in one example) in which pixels are arranged two-dimensionally, and a depth sensor that is a distance detection device that generates distance data.
  • the depth sensor is not limited to a specific type as long as it can acquire distance data to an object. For example, a stereo lens type or a LiDAR (Light Detection and Ranging) type can be used.
  • the depth sensor may generate a depth image, for example.
  • the camera 140 may acquire distance data using an ultrasonic element, for example.
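For illustration, the image and distance data described above can be fused into a 3D representation of the surroundings with a standard pinhole back-projection. This is a generic sketch; the intrinsic parameters `fx`, `fy`, `cx`, `cy` and the image size are assumed values, not parameters given in the disclosure.

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3D points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)              # shape (h, w, 3)

# Example with a flat synthetic depth map and assumed intrinsics.
points = depth_to_points(np.full((480, 640), 1.5), fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```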
  • The control device 150 has a processor 155, a storage unit 160, an operation unit 172, a display unit 174, and an input/output unit 176.
  • the control device 150 is depicted as a single element, but the control device 150 does not necessarily have to be a single physical element, and may be composed of multiple physically separated elements.
  • the operation unit 172 is a device for receiving input from a user.
  • the operation unit 172 may be configured with devices for inputting to a computer, such as a keyboard, a mouse, a touch panel, or a remote controller called a tracker or VR controller that is capable of tracking position and posture using infrared rays or the like and has a trigger button or the like.
  • the operation unit 172 may also have a voice input device such as a microphone.
  • the operation unit 172 may also have a gesture input device that uses image recognition to identify the user's movements.
  • the display unit 174 is a display device that displays the display screen generated by the processor 155, and may be, for example, a flat display device such as a liquid crystal display or an organic EL display device, or a head-mounted display (HMD).
  • the input/output unit 176 is connected to the arm unit 110, the holding unit 130, and the camera 140 of the item moving device 100 by wired or wireless communication, and outputs control signals and inputs acquired information between these components.
  • the storage unit 160 includes a temporary or non-temporary storage medium such as a ROM (Read Only Memory), a RAM (Random Access Memory), a HDD (Hard Disk Drive), or an SSD (Solid State Drive).
  • the storage unit 160 stores a computer program executed by the processor 155.
  • the computer program stored in the storage unit 160 includes instructions for implementing a method of controlling the item moving device 100 by the processor 155, which will be described later with reference to FIG. 7 etc.
  • the storage unit 160 further at least temporarily stores information received from the camera 140 and various data (including intermediately generated data) generated by the processing operations of the processor 155.
  • the processor 155 is composed of, for example, one or more CPUs (Central Processing Units). By executing a computer program stored in the memory unit 160, the processor 155 mainly controls processing based on input made by the user via the operation unit 172, input/output control of the input/output unit 176, display control of the display unit 174, etc. In particular, the processor 155 generates one or more control signals for operating the drive units (not shown) of the arm unit 110 and the holding unit 130 and the camera 140 based on user input input by the user via the operation unit 172.
  • the processor 155 is configured to generate a UI (user interface) screen to be presented to the user and display it on the display unit 174.
  • the UI screen (not shown) includes, for example, a selection button display that provides the user with multiple options.
  • the processor 155 generates an image or video of a virtual world (simulation space) based on a real-world image or video of the surrounding environment of the item moving device 100 acquired by the camera 140 of the item moving device 100, and displays it on the display unit 174.
  • the processor 155 When generating an image or video of the virtual world based on an image or video of the real world, the processor 155 establishes a correlation between the real world and the virtual world, for example, by associating a coordinate system of the real world with a coordinate system of the virtual world. Furthermore, an image or video of the real world and an image or video of the virtual world (simulation space) may be displayed on the display unit 174 simultaneously.
  • the images or videos of the virtual world (simulation space) generated based on real-world images or videos of the surrounding environment of the item moving device 100 also include objects (items 10, pallets 20, carts 30, conveyor belt 40, etc.) that exist in the surrounding environment of the item moving device 100.
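As a hedged illustration of the correspondence between the real-world and virtual-world coordinate systems mentioned above, a rigid homogeneous transform can map points from one frame to the other. The transform values below are assumptions for the example only, not calibration values from the disclosure.

```python
import numpy as np

def make_transform(rotation_z_rad, translation_xyz):
    """Homogeneous transform: rotation about the vertical axis followed by a translation."""
    c, s = np.cos(rotation_z_rad), np.sin(rotation_z_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = translation_xyz
    return T

# Map a point expressed in the virtual (simulation) frame into the real-world frame.
T_world_from_sim = make_transform(np.deg2rad(30.0), [1.0, 0.5, 0.0])  # assumed values
p_sim = np.array([0.2, 0.0, 0.8, 1.0])          # homogeneous point in the virtual frame
p_world = T_world_from_sim @ p_sim
```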
  • the processor 155 of the control device 150 is configured to generate a model corresponding to an object contained in an image or video of the virtual world (simulation space), and to perform processing to add attribute information of the object to the generated model.
  • Figures 5 and 6 are diagrams explaining the processing operation of the processor 155 of the control device 150 to assign annotation information to an object via a model corresponding to the object.
  • When the processor 155 of the control device 150 receives a predetermined operation (such as pressing a predetermined button on the controller) performed by the user on the operation unit 172, it displays a UI screen on the display unit 174 for the user to select the type of object for which a model is to be generated.
  • FIG. 5 shows an example of a UI screen for selecting an object type displayed on the display unit 174 by the processor 155.
  • FIG. 5 shows a state in which "Box (packaged product)", “Conveyor (conveyor belt)”, “Pallet (pallet)”, and “Cart (cart)” are presented as selectable object types, and the user is pointing to "Conveyor (conveyor belt)" on the operation unit 172 to select it.
  • the object type selection operation on the UI screen can be performed, for example, by pointing the tip of an instruction line that moves in conjunction with the operation unit 172 to the position of the option for the object type that is to be selected, and pressing a predetermined button on the operation unit 172.
  • The processor 155 subsequently acquires the attribute of the selected object type (in the above example, "Conveyor (conveyor belt)") as attribute information to be associated with the model to be generated, as described below.
  • After receiving the selection of the object type from the operation unit 172, the processor 155 then displays a virtual world (simulation space) of the surrounding environment of the item moving device 100 on the display unit 174. At this time, the display unit 174 displays a model for specifying the contour shape, posture, and dimensions of the selected object (in the above example, a conveyor belt) in the same virtual world (simulation space). As an example, the model has a rectangular parallelepiped shape.
  • the processor 155 changes the position, dimensions, and orientation of the model in the simulation space displayed on the display unit 174 in response to operation input from the operation unit 172 by the user.
  • Operation input from the operation unit 172 can be performed, for example, by pointing at the model Mdl and pressing a specified button on the controller to drag it to the desired position and orientation, or by pointing at the model Mdl and pressing a specified button on the controller to move any edge or vertex of it, in a manner similar to changing the dimensions of a so-called bounding box.
  • FIG. 6 shows how the position, dimensions, and orientation of the model are changed in the simulation space displayed on the display unit 174 in response to operation input from the operation unit 172 by the user.
  • FIG. 6(a) is a diagram showing a scene in which the position, dimensions, and orientation of the model are changed in response to operation input from the operation unit 172.
  • FIG. 6(a) shows a state in which the model Mdl displayed in the simulation space is roughly aligned with the position of a scanned image of the conveyor displayed in the same simulation space. From this state, when the position, orientation, and dimensions of each side of the model Mdl are further adjusted in response to operation input from the operation unit 172 by the user, the contour of the model Mdl roughly matches the external contour of the conveyor in the scanned image, as shown in FIG. 6(b). This completes the generation of the model Mdl.
  • When the processor 155 receives a model generation input operation (e.g., pressing a specific button on the controller) from the user on the operation unit 172, it obtains the position, orientation, and dimensions of each side of the model Mdl specified at that time as the position, orientation, and dimensions of each side of the corresponding object (in this example, the conveyor belt).
  • the data of the model Mdl generated in this way is stored in the memory unit 160 of the control device 150.
  • the processor 155 assigns annotation information including attribute information and information regarding position, orientation, and dimensions (length, width, and height dimensions) to the corresponding object via the model Mdl in the coordinate system of the virtual world (simulation space). Based on the annotation information assigned in this way, the processor 155 can calculate and recognize the position, orientation, and dimensions of the corresponding object (conveyor belt) in the coordinate system of the real world.
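What the annotation assigned through the model Mdl might look like as data can be sketched as follows: an attribute label together with position, orientation, and dimensions, from which the corner points of the corresponding cuboid can be computed in whichever coordinate system is in use. The class name, the yaw-only orientation, and the example dimensions are simplifying assumptions, not part of the disclosure.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    attribute: str        # "Box", "Pallet", "Cart", "Conveyor", ...
    center: np.ndarray    # (x, y, z) of the box centre
    yaw: float            # rotation about the vertical axis in radians (simplified)
    size: np.ndarray      # (length, width, height)

    def corners(self):
        """Eight corner points of the annotated box, in the same frame as `center`."""
        l, w, h = self.size / 2.0
        local = np.array([[sx * l, sy * w, sz * h]
                          for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        return local @ R.T + self.center

# Example annotation with assumed dimensions for a conveyor-like object.
conveyor = BoxAnnotation("Conveyor", np.array([2.0, 0.0, 0.4]), 0.0, np.array([3.0, 0.8, 0.8]))
```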
  • the processor 155 is also configured to detect objects present in an image captured by the camera 140 using any image recognition technology and/or a trained model.
  • the processor 155 can detect the contour shape of the upper surface of each of the items 10 exposed above based on an image captured by the camera 140 from above the multiple items 10 loaded on the pallet 20.
  • the trained model can be generated by performing machine learning with a neural network composed of multiple layers including neurons in each layer using image data of various images captured by the camera 140 from above the multiple items 10 loaded on the pallet 20 as described above.
  • a neural network such as a convolutional neural network (CNN) having 20 or more layers may be used. Machine learning using such a deep neural network is called deep learning.
  • the trained model described above can be generated using a "Visual Transformer,” which applies the Transformer, a type of deep neural network based mainly on a self-attention mechanism, to the field of computer vision.
  • the trained model generated in this way is stored in the storage unit 160.
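The detection of exposed top faces described above could be carried out by the trained model or, as a rough classical alternative, by contour extraction on the top-down image. The sketch below uses OpenCV (4.x return signature) with an assumed threshold and minimum area; it is not the trained CNN or Visual Transformer of the embodiment.

```python
import cv2

def detect_top_faces(bgr_image):
    """Return rotated rectangles approximating the exposed top faces of items."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = []
    for c in contours:
        if cv2.contourArea(c) > 1000:          # assumed minimum area in pixels
            rects.append(cv2.minAreaRect(c))   # ((cx, cy), (w, h), angle)
    return rects
```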
  • Fig. 7 is a flowchart showing the article moving operation by the article moving device 100.
  • In step S11, the processor 155 of the item moving device 100 acquires information about the surrounding environment of the item moving device 100 using the camera 140 installed in the holding unit 130.
  • the item moving device 100 is moved by the AGV 60, and its location within the warehouse is changed as appropriate.
  • Around the item moving device 100, pallets 20 loaded with items 10, carts 30 to which the items 10 are to be moved, a belt conveyor 40, etc. are arranged.
  • the processor 155 causes the item moving device 100 to execute a first operation and a second operation as an operation for acquiring ambient environment information of the item moving device 100 in step S11.
  • Figure 8 is a diagram for explaining the first operation and the second operation in step S11 shown in Figure 7.
  • the processor 155 extends each of the link members 112, 113, and 116 of the arm unit 110 of the item moving device 100 in a straight line so that they jut out from the base unit 120 as shown in FIG. 8(a), and operates the actuators (not shown) of each joint so that the imaging direction of the camera 140 installed in the holding unit 130 faces downward and forward.
  • the processor 155 then rotates the entire arm unit 110 a predetermined rotation angle (maximum one revolution) relative to the base unit 120 while capturing an image with the camera 140.
  • The camera 140, which is thus positioned at a higher position, obtains surrounding environment information in a first range that includes an area relatively far from the item moving device 100.
  • the processor 155 operates the actuators (not shown) of each joint so that the link members 112, 113, and 116 of the arm unit 110 are bent as shown in FIG. 8(b) and the imaging direction of the camera 140 installed in the holding unit 130 faces downward and forward.
  • the processor 155 then rotates the entire arm unit 110 a predetermined rotation angle (maximum one revolution) relative to the base unit 120 while capturing an image with the camera 140.
  • The camera 140, which is thus positioned at a lower position, obtains surrounding environment information in a second range that includes an area relatively close to the item moving device 100.
  • Through step S11, which includes the above-mentioned first and second operations, ambient environment information can be obtained over a relatively wide first range around the item moving device 100, and ambient environment information with higher resolution can be obtained for objects present in the second range relatively close to the item moving device 100.
  • the operation of acquiring the surrounding environment information of the item moving device 100 includes the first and second operations, but it is also possible to execute only one of the first and second operations (i.e., only one acquisition operation) as the operation of acquiring the surrounding environment information.
  • the operation of acquiring the surrounding environment information may be executed automatically based on the control of the processor 155, or may be executed by manually operating the arm unit 110 and the holding unit 130 by a user input operation via the operation unit 172. In the latter case, by concentrating the camera 140 around the object about which the surrounding environment information is to be acquired and acquiring information, it is possible to acquire information necessary for the operation of the item moving device 100 even with a smaller amount of data than when information about the entire surroundings is automatically acquired.
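The two-pass acquisition of step S11 can be summarized in a short sketch. The robot and camera interfaces (`set_joint_angles`, `rotate_base`, `capture`) and the named poses are hypothetical; the point is only the ordering: one sweep with the arm extended for the farther first range, then one with the arm folded for the nearer second range, each rotating the arm by at most one revolution.

```python
def acquire_surroundings(arm, camera, sweep_deg=360.0, step_deg=30.0):
    """Two-pass scan: extended arm for the far range, folded arm for the near range."""
    frames = {"far": [], "near": []}
    poses = {
        "far":  "extended_camera_down_forward",   # first operation (FIG. 8(a))
        "near": "folded_camera_down_forward",     # second operation (FIG. 8(b))
    }
    for label, pose in poses.items():
        arm.set_joint_angles(pose)                # hypothetical named pose
        angle = 0.0
        while angle < sweep_deg:                  # rotate at most one revolution
            arm.rotate_base(step_deg)
            frames[label].append(camera.capture())
            angle += step_deg
    return frames
```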
  • In step S12, the processor 155 generates a virtual world (simulation space) that reproduces the surrounding environment of the item moving device 100 based on the surrounding environment information acquired in step S11, and displays it on the display unit 174.
  • the virtual world displays each object in the real world that exists at least within a range accessible to the item moving device 100.
  • the objects may be represented by two-dimensional or three-dimensional images of the real-world objects obtained by the camera 140, depth maps, point clouds, or the like. Alternatively, they may be represented by computer graphics that represent the objects.
  • In step S13, the processor 155 assigns annotation information to the objects displayed in the virtual world (simulation space).
  • the processor 155 assigns annotation information to the object based on selection information (see FIG. 5) regarding the type of object to which annotation information is to be assigned, which is input by the user operating the operation unit 172 as described above with reference to FIGS. 5 and 6, and information regarding the position, orientation, and dimensions of the model determined for the object in the virtual world.
  • the annotation information is added to the objects in step S13 for each of the various objects (items 10, pallets 20, carts 30, conveyor belts 40, etc.) displayed in the virtual world.
  • annotation information may be added to each item 10 individually, or when multiple items 10 loaded on the pallet 20 are all the same size (length, width, and height), annotation information (information regarding position, orientation, and dimensions) may be added to at least one item 10 on the pallet 20.
  • In this case, the processor 155 assigns, to the other items 10 detected using any image recognition technology and/or the trained model, the same annotation information as that assigned to the at least one item 10.
  • the processor 155 assigns annotation information including attribute information and information regarding position, orientation, and size to various objects via the model Mdl in the coordinate system of the virtual world.
  • the processor 155 recognizes that the objects in the virtual space displayed on the display unit 174 are not simply objects occupying a certain volumetric space, but represent objects specified by the assigned annotation information, and the processor 155 is able to calculate and recognize the positions, orientations, and sizes of various corresponding objects in the coordinate system of the real world based on the annotation information assigned in this manner.
  • the processor 155 recognizes, based on the annotation information, what types of objects exist around the item moving device 100 in the real world, and in what positions, orientations, and sizes.
  • the annotation information assigned to each object is stored at least temporarily in the storage unit 160.
  • In step S14, the processor 155 determines whether each object recognized based on the annotation information is present within a range accessible to the holding unit 130 by the arm unit 110 of the item moving device 100.
  • Information regarding the range accessible to the holding unit 130 by the arm unit 110 of the item moving device 100 is pre-stored in the storage unit 160 as known information.
  • Information regarding the accessible range may be planar information (e.g., circular or sectorial) centered on the item moving device 100, or may be three-dimensional information (e.g., cylindrical, sectorial, or hemispherical) in which height information is added to such planar information.
  • Such information regarding the accessible range is determined based on the dimensions and movable range of the arm unit 110 and holding unit 130 of the item moving device 100.
  • the processor 155 determines for each object whether 1) the entire object is within the accessible range, or 2) at least a portion of the object is outside the accessible range, based on information about the accessible range.
  • the determination results are stored at least temporarily in the storage unit 160 in association with the annotation information assigned to each object.
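A simplified version of the step S14 determination, assuming the accessible range is modelled as a circle of an assumed radius centred on the item moving device: an object is treated as entirely within the range only if every corner of its footprint lies inside the circle. The radius and coordinates below are example assumptions.

```python
import numpy as np

def fully_accessible(corners_xy, device_xy, reach_radius_m):
    """True if every footprint corner of the object lies within the reach circle."""
    d = np.linalg.norm(np.asarray(corners_xy) - np.asarray(device_xy), axis=1)
    return bool(np.all(d <= reach_radius_m))

# Example: a pallet footprint checked against an assumed 1.8 m reach.
pallet_corners = [(1.0, 0.2), (2.1, 0.2), (2.1, 1.0), (1.0, 1.0)]
print(fully_accessible(pallet_corners, device_xy=(0.0, 0.0), reach_radius_m=1.8))  # False
```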
  • In step S15, the processor 155 receives a first input that specifies the object (item 10) to be moved by the item moving device 100, and a second input that specifies the position or area on an object to which the item 10 is to be moved.
  • the first input for specifying an object (item 10) to be moved by the item moving device 100 can be executed, for example, by a user operating the operation unit 172 performing an input operation for specifying a model Mdl corresponding to each item 10 displayed in the virtual space.
  • the model Mdl corresponding to each item 10 can be specified, for example, by pointing to each model Mdl individually and pressing a specified button on the controller to confirm. Alternatively, it can also be specified by an operation for surrounding multiple models Mdl corresponding to multiple items 10 with a three-dimensional bounding box.
  • FIG. 9 is a diagram showing an example of a state in which a position or area on an object to which an item is to be moved is specified.
  • FIG. 9 shows a state in which the center of the conveyor belt 40 is specified as the position or area on the object to which the item 10 is to be moved.
  • a cube-shaped marker P indicating the destination of the item is placed on the center of the model Mdl_1 representing the conveyor belt 40.
  • the marker P is placed at an angle that follows the angle of the model Mdl_1, which means that the item 10 is specified to be transported in the same orientation as the marker P to the position indicated by the marker P on the conveyor belt 40 in the real world.
  • FIG. 9 also shows a model Mdl_2 representing a pallet 20.
  • the second input specifying the destination position or area to which the item 10 is to be moved can be executed, for example, by a user operating the operation unit 172 performing an input operation to specify a model Mdl corresponding to a destination object displayed in the virtual space. More specifically, the destination position or area to which the item 10 is to be moved can be specified by specifying a position on the model Mdl corresponding to the belt conveyor 40 when each item 10 is to be moved to a specific position on the belt conveyor 40, or by specifying the corresponding area on the model Mdl with a bounding box when each item 10 is to be moved to an area on the pallet 20 or cart 30.
  • In step S16, the processor 155 determines, based on the first and second inputs, whether the specified object to be moved (the item 10) and each object related to the specified destination position or area (pallet 20, cart 30, conveyor belt 40, etc.) are within the accessible range. This determination can be made based on the determination results of step S14 associated with the annotation information assigned to these specified objects. If the processor 155 determines that the entirety of each of these specified objects is within the accessible range (Y), it proceeds to the processing of step S17 described below.
  • Otherwise (N), the processor 155 displays, for the object determined not to be entirely within the accessible range, a message on the display unit 174 such as "The movement operation cannot be performed because XX (object) is not within the accessible range. Please move XX (object) to a position closer to the robot and run the process from the beginning.", and ends the process.
  • In step S17, the processor 155 performs motion planning for operating the item moving device 100 (particularly the arm unit 110 and the holding unit 130) to move the specified item 10 to the specified position or area based on the user input received in step S15, generates an operation command for operating the item moving device 100, and operates the item moving device 100 based on the operation command to move the specified item 10 to the position or area on the specified object.
  • the processor 155 causes the item moving device 100 to repeatedly perform the pick-and-place operation of the item 10 until all of the specified items 10 have been moved to the specified positions or areas.
  • the operation of picking up the items 10 with the holding unit 130 is controlled by the processor 155, for example, to detect the contour shape of the upper surface of each of the items 10 exposed above based on an image of the multiple loaded items 10 captured from above by the camera 140, and to appropriately align the suction cups 132 of the holding unit 130 with the upper surface of each item 10 based on the detected contour shape of each item 10, and to suction and hold each item 10 with the suction cups 132.
  • the processor 155 may execute an operation of acquiring the height dimension of the picked up item 10.
  • an additional camera (not shown) having the same function as the camera 140 is installed on the base unit 120 of the item moving device 100, and the processor 155 executes an operation of moving the item 10 once in front of the additional camera after the holding unit 130 picks up the item 10, and capturing an image of the item 10 held by the holding unit 130 from the side with the additional camera.
  • the processor 155 then executes a process of applying any image recognition technology to the captured image to determine the height dimension of the item 10, thereby acquiring the height dimension of the item 10.
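The height measurement from the side image can be illustrated with the pinhole-camera relation: metric height is approximately pixel height times distance divided by the focal length in pixels. The numbers are assumed, and the real procedure would also rely on image recognition to locate the item's top and bottom edges in the captured image.

```python
def height_from_side_view(pixel_height, depth_m, fy_pixels):
    """Pinhole approximation of the metric height of an object seen side-on."""
    return pixel_height * depth_m / fy_pixels

# Example with assumed values: 180 px tall at 0.9 m from a camera with fy = 600 px.
print(height_from_side_view(pixel_height=180, depth_m=0.9, fy_pixels=600.0))  # 0.27 m
```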
  • the pick-and-place operation of moving the items 10 to a designated position or area is controlled by the processor 155, for example, to repeatedly move each item 10 to a designated position (e.g., the center part of the belt conveyor 40 in the above example) and repeatedly grip and release the items 10 with the suction cups 132, or to repeatedly load the items 10 onto a designated area of the pallet 20 or cart 30.
  • the latter operation of sequentially loading items 10 onto a designated area is controlled by the processor 155 to determine the loading position and orientation of the items 10 based on the contour shape of the top surface of the items 10 to be moved so that the designated area is filled by the contour shape, and to sequentially load the items 10 into the designated area according to the loading position and orientation.
  • each item 10 is provided with annotation information, and the annotation information includes information on the height dimension of the item 10 in particular. Furthermore, information on the height dimension of each item 10 can also be obtained by an additional camera installed in the item moving device 100. Based on such item height dimension information, when the item 10 held by the holding unit 130 is moved to a designated position, the operation of the arm unit 110 and the holding unit 130 is controlled by the processor 155 so that a gap larger than the height dimension of the item 10 is maintained between the holding unit 130 and the designated position (the upper surface of the pallet 20, cart 30, belt conveyor 40, etc.). This makes it possible to prevent the item 10 held by the holding unit 130 from being pressed against the pallet 20, cart 30, belt conveyor 40, etc. at the designated position, which would damage the item 10 or the pallet 20, cart 30, belt conveyor 40, etc. at the designated position.
  • the processor 155 takes into account the size (length, width, and height dimensions) of each item 10, performs motion planning to move those items 10 so that some or all of the space on the destination pallet 20 or cart 30 is filled with the volume of the multiple items 10 to be moved, and causes the item moving device 100 to execute the movement operation of the multiple items 10.
  • the processor 155 recognizes that each item 10 at the source has the same size based on the annotation information (particularly information regarding position, orientation, and size) assigned to at least one item 10 as described above, performs motion planning to move the items 10 so that some or all of the space on the destination pallet 20 or cart 30 is filled with the volume of the multiple items 10 to be moved, and causes the item moving device 100 to execute the movement operation of the multiple items 10.
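A very simplified sketch of the kind of loading plan described above, assuming identical items and an axis-aligned rectangular destination area: the items are laid out on a grid that fills the area, and each release height keeps a clearance greater than the item height above the destination surface, as described for the placement operation. All sizes are example assumptions.

```python
def plan_grid_placement(area_w, area_d, item_w, item_d, item_h, surface_z, clearance=0.02):
    """Grid of (x, y, release_z) poses filling a rectangular destination area."""
    cols = int(area_w // item_w)
    rows = int(area_d // item_d)
    release_z = surface_z + item_h + clearance   # keep a gap larger than the item height
    poses = []
    for r in range(rows):
        for c in range(cols):
            x = (c + 0.5) * item_w
            y = (r + 0.5) * item_d
            poses.append((x, y, release_z))
    return poses

# Example: 1.1 m x 0.9 m cart bed, 0.35 m x 0.25 m x 0.2 m boxes (assumed sizes).
print(len(plan_grid_placement(1.1, 0.9, 0.35, 0.25, 0.2, surface_z=0.6)))  # 9 poses
```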
  • According to this embodiment, annotation information including attribute information and information regarding the position, orientation, and dimensions of the objects that exist around the item moving device 100 in the real world (the item 10 to be moved, and the destination pallet 20, cart 30, belt conveyor 40, etc.) can be assigned to those objects, thereby allowing the item moving device 100 to recognize the attributes, positions, orientations, and dimensions of those objects in the real world.
  • Even when the positions of the item moving device 100 and the surrounding objects are changed, the item moving device 100 can therefore easily and quickly recognize its surrounding environment after the movement.
  • the item moving device 100 of this embodiment is configured to, after receiving a user input specifying the item 10 to be moved and a user input specifying the destination position or area, determine whether an object related to the movement operation of the specified item 10 is present within an accessible range of the item moving device 100, and not execute the item moving operation if any of the objects are not present within the accessible range. This makes it possible to prevent incidents (such as damage to the item 10 or other objects due to the item 10 falling or collapsing, etc.) that may occur when an item moving operation is executed when any of the objects are not present within the accessible range.
  • the object determined not to be within the accessible range is moved by the AGV 60 to a position closer to the item moving device 100.
  • the object may be moved by the AGV 60 by a user manually operating the AGV 60 by remote control, or the movement of the AGV 60 may be controlled by the processor 155 of the item moving device 100 while the item moving device 100 and the AGV 60 are communicating with each other via the input/output unit 176 of the item moving device 100.
  • FIG. 10 is a diagram showing a first modified example of the article moving device in this embodiment.
  • the article moving device 100 of this modified example has an instruction display unit 180 that projects an instruction display indicating a position to which an object is to be moved, provided on the holding unit 130.
  • the instruction display unit 180 can be configured by an optical projector that projects an image of an instruction display onto a floor surface or the like on which the article moving device 100 is placed, a laser emitting device that can radiate a laser light while scanning it to draw a desired figure or character, or the like.
  • the instruction display presented by the instruction display unit 180 may be, for example, a figure of any shape such as a cross, a circle, or an arrow. In the example shown in FIG. 10, the instruction display unit 180 is configured to project a cross-shaped instruction display.
  • an instruction display is projected by the instruction display unit 180 at a location where an object such as a pallet, cart, or belt conveyor should be placed.
  • the object can be moved by the AGV 60 by a user manually operating the AGV 60 by remote control, and moving the object so as to align the AGV 60 with the instruction display projected on the floor surface or the like.
  • If the AGV 60 is equipped with an imaging camera and an image recognition device, the AGV 60 can recognize the instruction display projected on the floor surface or the like and travel autonomously so as to align itself with the position of the instruction display.
  • an operator may visually check the instruction display projected by the instruction display unit 180, and move the object himself so as to align the object with the instruction display.
  • FIG. 10B shows an example of the projection operation of the instruction display by the article moving device 100 of this modified example.
  • the instruction display unit 180 projects an instruction display mk1 at a location where an object (e.g., a pallet) is to be placed, and the pallet is placed at the position of the instruction display using the AGV 60 or the like.
  • the robot arm 110 is extended and/or rotated, and the instruction display unit 180 projects the next instruction display mk2 at a location where the next object (e.g., a cart) is to be placed, and the cart is placed at the position of the instruction display using the AGV 60 or the like.
  • the article moving device 100 of this modified example repeatedly executes the operation of displaying an instruction display by the instruction display unit 180 at a location where an object is to be placed.
  • the position where the instruction display is projected by the instruction display unit 180 can be specified by the user using the input/output unit 176 of the control device 150, for example, in a virtual world (simulation space) in which the control device 150 reproduces the surrounding environment of the article moving device 100 in the real world.
  • the processor 155 executes the process described with reference to FIG. 7 again, starting from the initial step S11.
  • surrounding environment information including the moved object (pallet 20, cart 30, conveyor belt 40, etc.) is acquired again (step S11)
  • the moved object is reproduced in the virtual world (step S12)
  • annotation information is added to the moved object (step S13)
  • a determination is made as to whether at least the moved object is within the accessible range (step S14), and a designation input is made for at least the moved object (step S15).
  • In step S16, if it is determined that the specified object is within the accessible range, the item 10 is moved (step S17). If it is again determined that the specified object is not within the accessible range of the item moving device 100, the object is moved again by the AGV 60 and the above steps S11 to S16 are repeated until it is determined that the specified object is within the accessible range.
  • In another modified example, shown in FIG. 11, an instruction display unit 200 is provided on the base section 120 that fixedly supports the arm section 110 of the article moving device 100.
  • the base section 120 is fixed on the mounting table 50.
  • The instruction display unit 200 has a peripheral fixing part 210 fixed to the base part 120 so as to surround the periphery of the columnar base part 120, a peripheral rotating part 215 supported by the peripheral fixing part 210 so as to surround the outer periphery of the peripheral fixing part 210 and be rotatable along the circumferential direction of the peripheral fixing part 210, and a rotary drive source 220 that rotates the peripheral rotating part 215.
  • The peripheral rotating part 215 is supported by the peripheral fixing part 210 so that it can rotate in the circumferential direction of the peripheral fixing part 210 but cannot move in the axial direction of the peripheral fixing part 210 (the illustrated z direction).
  • Gear teeth are formed on the outer peripheral surface of the peripheral rotating part 215.
  • The rotary drive source 220 has a gear with teeth formed on its outer circumferential surface and a motor that rotates the gear.
  • The gear of the rotary drive source 220 meshes with the teeth formed on the outer circumferential surface of the peripheral rotating part 215, so that the peripheral rotating part 215 can be rotated by driving the gear of the rotary drive source 220.
  • The rotational direction of the peripheral rotating part 215 can be switched by switching the rotational direction of the motor of the rotary drive source 220.
  • The instruction display unit 200 further includes a first drive unit 230 provided on the peripheral rotating part 215, a rod portion 240 whose base end is supported by the first drive unit 230, a second drive unit 250 provided at the tip end of the rod portion 240, and an instruction display unit 260 supported by the second drive unit 250.
  • the first drive unit 230 is, for example, a servo motor, and can rotate the rod unit 240 supported by the first drive unit 230 around its base side as the center of rotation.
  • the rod unit 240 can be rotated in a direction in which the tip of the rod unit 240 is lifted upward from a state in which the tip of the rod unit 240 faces downward as shown in FIG. 11.
  • the second drive unit 250 is also, for example, a servo motor, and can rotate the instruction display unit 260.
  • the instruction display unit 260 can also be configured by an optical projector that projects an image of an instruction display onto the floor surface on which the item moving device 100 is placed, or a laser emission device that can emit laser light while scanning it to draw desired figures or characters.
  • The instruction display unit 200 of this modified example can change the position and orientation of the instruction display unit 260 relative to the base unit 120 with three degrees of freedom: rotation of the peripheral rotating part 215 (first degree of freedom), rotation by the first drive unit 230 (second degree of freedom), and rotation by the second drive unit 250 (third degree of freedom). Therefore, by driving each drive unit to appropriately change the position and orientation of the instruction display unit 260, an instruction display can be projected from the instruction display unit 260 onto a location around the item moving device 100 where an object such as a pallet, cart, or belt conveyor should be placed.
  • Since the instruction display unit 200 of this modified example is independent of the arm unit 110 of the item moving device 100, unlike the configuration of the first modified example in which the instruction display unit 180 is provided on the holding unit 130 at the tip of the arm unit 110, there is an advantage that the instruction display unit 260 can project an instruction display independently of the operation of the arm unit 110.
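The three degrees of freedom of this modified example can be illustrated with a rough forward-kinematics sketch that estimates where the projector's central ray meets the floor, given the base rotation, the rod elevation set by the first drive unit, and the projector tilt set by the second drive unit. The link length, mounting height, and ray model below are assumptions for illustration only, not dimensions from the disclosure.

```python
import numpy as np

def projection_point_on_floor(base_yaw, rod_elev, proj_tilt, base_height=0.6, rod_len=0.5):
    """Approximate floor point hit by the projector's central ray (assumed geometry)."""
    # Projector position: rod of length rod_len raised by rod_elev, swung by base_yaw.
    px = np.cos(base_yaw) * np.cos(rod_elev) * rod_len
    py = np.sin(base_yaw) * np.cos(rod_elev) * rod_len
    pz = base_height + np.sin(rod_elev) * rod_len
    # Ray direction: pitched downward by (rod_elev + proj_tilt) within the yaw plane.
    pitch = rod_elev + proj_tilt
    d = np.array([np.cos(base_yaw) * np.cos(pitch),
                  np.sin(base_yaw) * np.cos(pitch),
                  -np.sin(pitch)])
    if d[2] >= 0:                       # ray does not reach the floor
        return None
    t = -pz / d[2]                      # intersect with the floor plane z = 0
    return np.array([px, py, pz]) + t * d

print(projection_point_on_floor(np.deg2rad(45), np.deg2rad(20), np.deg2rad(40)))
```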

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

One embodiment of the present invention provides an article moving device 100 comprising: an arm part 110 comprising a holding part 130; an acquisition part 140 for acquiring surrounding environment information; and a processor 155 serving as a control unit. The processor 155 is configured to: cause the acquisition part 140 to acquire surrounding environment information; generate a virtual space including an object in the actual world on the basis of the surrounding environment information; impart, to the object in a virtual world, annotation information including attribute information of the object and information pertaining to the position, the orientation, and the dimensions of the object; receive, with respect to the object present in the virtual world, a first input specifying an article to be moved, and a second input specifying the position or region of the destination of the article; and cause the holding part and the arm part to perform an operation for moving the specified article to the specified position or region in the real world on the basis of the first and second inputs.

Description

Article moving device and control method thereof
 The present invention relates to an object moving device and a control method thereof.
 Traditionally, in logistics warehouses and the like, which are base stations for transporting goods, an item moving device equipped with a robot arm is used to transfer items such as packaged goods loaded on a pallet onto a cart or to move them sequentially onto a conveyor belt. In order for the item moving device to perform appropriate operations on items packed in packaging boxes, pallets, carts, conveyor belts, etc. (hereinafter collectively referred to as "objects"), it is preferable for the item moving device to recognize the attributes, position, orientation, etc. of these objects.
 Patent Document 1 proposes a method for assigning attribute information to an object, which includes generating a virtual world that reproduces a real-world environment, obtaining a model in the virtual world that corresponds to the object to which attribute information is to be assigned, and assigning attribute information to the model, the attribute information including information about at least one part of the model.
 Patent Document 2 also proposes a method for identifying the position and orientation of an object, which includes generating a virtual world that includes a display of the object in the real world, displaying a model that corresponds to the object in the virtual world, overlaying the model on the object in the virtual world, and comparing the object and the model to identify the position and orientation of the object.
Patent Document 1: International Publication No. 2020/218533
Patent Document 2: International Publication No. 2020/235539
 However, neither Patent Document 1 nor Patent Document 2 provides a means for assigning attribute information and information regarding the position and orientation of an object all at once through a series of operations.
 In addition, while the techniques disclosed in Patent Documents 1 and 2 make it possible to obtain information about the attributes, positions, and orientations of objects such as items, pallets, carts, and conveyor belts that exist around an item moving device in a logistics warehouse or the like, Patent Documents 1 and 2 do not provide any means for determining whether those objects that exist around the item moving device are positioned so that the item moving device can perform an item moving operation (for example, the operation of transferring items from a pallet to a cart).
 In work sites such as logistics warehouses, the positions of the item moving device and the objects placed around it are not fixed, but may be changed as appropriate depending on the moving work. When the positions of the item moving device and the various objects around it are changed in this way, the item moving work cannot be performed by the item moving device unless the changed positions of those objects are located in positions where the item moving work can be performed by the item moving device.
 The object of one aspect of the present disclosure is to provide a means for easily assigning annotation information regarding the attributes, position, orientation, and dimensions of objects placed around an item moving device to the objects. The object of another aspect of the present disclosure is to provide a means for determining whether an object is located at a position where an item moving operation can be performed by the item moving device.
 According to one aspect of the present disclosure, an object moving device for moving an object is provided, the object moving device including an arm unit having a holding unit for holding an object, an acquisition unit for acquiring ambient environment information of the object moving device, and a control unit for controlling the operation of the holding unit, the arm unit, and the acquisition unit. The control unit is configured to: cause the acquisition unit to acquire ambient environment information of the object moving device in the real world; generate a virtual space including objects present around the object moving device in the real world based on the ambient environment information; assign annotation information including attribute information of the object and information regarding its position, orientation, and size to the object in the virtual world; accept a first input for specifying an object to be moved and a second input for specifying a position or area to which the object is to be moved, for the object present in the virtual world; and cause the holding unit and the arm unit to perform an operation of moving the specified object to a specified position or area in the real world based on the first and second inputs.
 According to another aspect of the present disclosure, the control unit is further configured to determine whether the object specified in the first and second inputs is within an accessible range of the item moving device, and causes the holding unit and the arm unit to perform an operation to move the specified item to a specified position or area when the specified object is within the accessible range.
 Other features and advantages of the present disclosure can be seen from the following description and the accompanying drawings, which are given by way of example and are non-exhaustive.
 According to one aspect of the present disclosure, there is provided a means for easily assigning annotation information regarding the attributes, position, orientation, and dimensions of objects placed around an item moving device to the objects. According to another aspect of the present disclosure, there is provided a means for determining whether the placement position of an object is a placement position where an item moving operation can be performed by the item moving device.
図1は、倉庫内における物品移動装置及びその周囲に配置されたオブジェクトを模式的に示す概略平面図である。 FIG. 1 is a schematic plan view schematically showing an article moving device and objects arranged around it in a warehouse.
図2は、倉庫内における物品移動装置及びオブジェクトを示す概略正面図である。 FIG. 2 is a schematic front view showing the article moving device and objects in the warehouse.
図3は、本実施形態に係る物品移動装置の構成を模式的に示す側面図である。 FIG. 3 is a side view showing a schematic configuration of the article moving device according to the present embodiment.
図4は、本実施形態に係る物品移動装置の構成を示すブロック図である。 FIG. 4 is a block diagram showing a configuration of the article moving device according to the present embodiment.
図5は、表示部に表示されるオブジェクト種類選択のためのUI画面の一例を示す図である。 FIG. 5 is a diagram showing an example of a UI screen for selecting an object type displayed on a display unit.
図6は、ユーザによる操作部からの操作入力に応じて、表示部に表示される仮想世界においてモデルの位置、姿勢及び寸法を変化させる様子を示す図である。 FIG. 6 is a diagram showing how the position, orientation, and dimensions of a model are changed in the virtual world displayed on the display unit in response to an operation input from a user via an operation unit.
図7は、物品移動装置による物品移動動作を示すフローチャートである。 FIG. 7 is a flowchart showing an article moving operation by the article moving device.
図8は、図7に示したステップS11における第1の動作及び第2の動作を説明する図である。 FIG. 8 is a diagram illustrating a first operation and a second operation in step S11 shown in FIG. 7.
図9は、物品を移動させる先の位置又は領域が指定された状態の一例を示す図である。 FIG. 9 is a diagram showing an example of a state in which a position or area to which an item is to be moved has been specified.
図10は、本実施形態における物品移動装置の一変形例を示す図である。 FIG. 10 is a diagram showing a modified example of the article moving device in the present embodiment.
図11は、本実施形態における物品移動装置の他の変形例を示す図である。 FIG. 11 is a diagram showing another modified example of the article moving device in the present embodiment.
 以下、本開示の一実施形態について図面を参照して説明する。
<ロジスティクス倉庫の配置構成>
Hereinafter, an embodiment of the present disclosure will be described with reference to the drawings.
<Logistics warehouse layout>
 最初に、ロジスティクス倉庫(以下、単に「倉庫」とも称する。)の配置構成について説明する。図1は、倉庫内における物品移動装置及びその周囲に配置されたオブジェクト(梱包箱に梱包された商品等の物品、パレット、カート、ベルトコンベア等)を模式的に示す概略平面図である。図2は、倉庫内における物品移動装置及びオブジェクトを示す概略正面図である。 First, the layout and configuration of a logistics warehouse (hereinafter also simply referred to as a "warehouse") will be described. Fig. 1 is a schematic plan view showing an item moving device in a warehouse and objects arranged around it (items such as products packed in packing boxes, pallets, carts, conveyor belts, etc.). Fig. 2 is a schematic front view showing the item moving device and objects in a warehouse.
 一例として図1及び図2に示すように、倉庫内には、ロボットアーム110を備えた物品移動装置100と、梱包箱に梱包された商品等の物品10が載置されるパレット20と、作業者が物品10を搬送するためのカート30と、回転するベルト上に載せられた物品10を搬送するベルトコンベヤ40とが配置されている。パレット20、カート30、ベルトコンベヤ40は物品移動装置100のロボットアーム110による物品移動作業が可能な距離範囲内に配置される。 As an example, as shown in Figures 1 and 2, within a warehouse are arranged an item moving device 100 equipped with a robot arm 110, a pallet 20 on which items 10, such as goods packed in packing boxes, are placed, a cart 30 for a worker to transport the items 10, and a belt conveyor 40 for transporting the items 10 placed on a rotating belt. The pallet 20, cart 30, and belt conveyor 40 are positioned within a distance range where the item moving work can be performed by the robot arm 110 of the item moving device 100.
 物品移動装置100は、載置台50の上に固定された状態で設置されている。載置台50は、2つの側面部52と1つの上面部54とを備え、上面部54と床面等との間に後述する自動搬送装置(AGV)60が進入可能な空間を形成している。 The article moving device 100 is installed in a fixed state on the loading platform 50. The loading platform 50 has two side sections 52 and one top section 54, and a space is formed between the top section 54 and the floor surface or the like into which the automated guided vehicle (AGV) 60 (described later) can enter.
 パレット20は、2つの側面部22と1つの上面部24とを備え、上面部24と床面等との間に後述する自動搬送装置(AGV)60が進入可能な空間を形成している。 The pallet 20 has two side sections 22 and one top section 24, and a space is formed between the top section 24 and the floor surface or the like, allowing the automatic guided vehicle (AGV) 60, which will be described later, to enter.
 カート30は、物品10が積載される積載部32と、積載部32の下側に設置された複数のキャスター34と、積載部32の上に設けられた保護柵36とを備えている。カート30がキャスター34を備えることにより、作業者がカート30を押して倉庫内を移動させることが可能である。保護柵36は、積載部32の3つの側面に設けられ、積載部32の上に積載された物品10が荷崩れしてカート30から落下等することを防止する。 The cart 30 comprises a loading section 32 on which the items 10 are loaded, a number of casters 34 installed on the underside of the loading section 32, and a protective fence 36 provided on the loading section 32. The cart 30 is provided with casters 34, allowing an operator to push the cart 30 around the warehouse. The protective fence 36 is provided on three sides of the loading section 32 and prevents the items 10 loaded on the loading section 32 from collapsing and falling off the cart 30, etc.
 物品移動装置100を載せた載置台50とパレット20は、自動搬送装置60(以下、「AGV(Automatic Guided Vehicle)」と称する。)によって倉庫内を移動させることが可能である。AGV60は、周囲環境をセンシングして自律的に移動することが可能であり、あるいは、ユーザによる遠隔操作に応じて移動することも可能である。AGV60にはその上面部分62を上下方向に移動させる油圧システムが設けられている。AGV60は、上面部分62を下げて高さを低くした状態でコの字形のパレット20又は載置台50の下に進入した後、油圧システムを作動させて上面部分62を上昇させてパレット20又は載置台50を床面から離れる高さに持ち上げ、パレット20又は載置台50を載せた状態で倉庫内の他の位置まで搬送する。AGV60は、搬送位置まで移動した後に上面部分62を下げてパレット20又は載置台50を床面に接地させて搬送を終了し、その下から退出する。また、AGV60はカート30を牽引して倉庫内を移動させることも可能である。 The platform 50 carrying the item moving device 100 and the pallet 20 can be moved within the warehouse by an automatic guided vehicle 60 (hereinafter referred to as the "AGV (Automatic Guided Vehicle)"). The AGV 60 can move autonomously by sensing the surrounding environment, or can move in response to remote control by a user. The AGV 60 is equipped with a hydraulic system that moves its upper surface portion 62 in the vertical direction. The AGV 60 enters under the U-shaped pallet 20 or platform 50 with its upper surface portion 62 lowered, then activates the hydraulic system to raise the upper surface portion 62, lifting the pallet 20 or platform 50 clear of the floor, and transports the pallet 20 or platform 50 to another position in the warehouse. After moving to the destination, the AGV 60 lowers the upper surface portion 62 to set the pallet 20 or platform 50 back on the floor, completing the transport, and exits from underneath. The AGV 60 can also tow the cart 30 to move it around the warehouse.
<物品移動装置100の構成>
 次に、本実施形態に係る物品移動装置100について、図1~図4を参照して以下に説明する。図3は本実施形態に係る物品移動装置100の構成を模式的に示す側面図であり、図4は本実施形態に係る物品移動装置100の構成を示すブロック図である。
<Configuration of the article moving device 100>
Next, an article moving device 100 according to this embodiment will be described below with reference to Fig. 1 to Fig. 4. Fig. 3 is a side view that shows a schematic configuration of the article moving device 100 according to this embodiment, and Fig. 4 is a block diagram that shows the configuration of the article moving device 100 according to this embodiment.
 物品移動装置100は、アーム部110と、アーム部110を固定支持するベース部120と、アーム部110の先端に設けられた保持部130と、保持部130に設けられたカメラ140と、アーム部110及び保持部130の動作制御及び後述する各処理を含む物品移動装置100全体の制御処理を司る制御装置150とを備えている。物品移動装置100は、物品10を保持して他の位置に移動させる動作(以下、「ピックアンドプレース(PnP)動作」とも称する。)を行う荷役ロボットとして機能し得る。物品移動装置100は、例えば、ロジスティクス倉庫内のパレット20上に積載された梱包済み商品等の物品10をカート30上に積み替えたり、ベルトコンベア40上に移動させる。 The item moving device 100 includes an arm section 110, a base section 120 that supports and fixes the arm section 110, a holding section 130 provided at the tip of the arm section 110, a camera 140 provided on the holding section 130, and a control device 150 that controls the operation of the arm section 110 and the holding section 130, as well as the overall control processing of the item moving device 100, including each process described below. The item moving device 100 can function as a cargo handling robot that holds an item 10 and moves it to another position (hereinafter also referred to as a "pick-and-place (PnP) operation"). For example, the item moving device 100 transfers the item 10, such as packaged goods loaded on a pallet 20 in a logistics warehouse, onto a cart 30, or moves it onto a belt conveyor 40.
 アーム部110は、複数のリンク部材112,113,116を有している。複数のリンク部材112,113,116は、多関節ロボットアームを構成する。多関節ロボットアームは、一例として、X軸方向、Y軸方向、およびZ軸方向のそれぞれに沿う直線方向に自由度を持ち、かつ、X軸周り、Y軸周り、およびZ軸周りのそれぞれの方向に自由度を持つ6軸のアームであってもよい。多関節ロボットアームは、他にも、直交座標系ロボットアーム、極座標系ロボットアーム、円筒座標系ロボットアーム、スカラ型ロボットアームなど任意の機構を有してもよい。アーム部110の基端部は上述した載置台50に固定された状態で載置される。アーム部110は、それぞれのリンク部材112,113,116を動かすことにより、アーム部110がリーチ可能な距離範囲内において保持部130を移動させることができる。 The arm unit 110 has a plurality of link members 112, 113, and 116. The plurality of link members 112, 113, and 116 constitute a multi-joint robot arm. As an example, the multi-joint robot arm may be a six-axis arm having degrees of freedom in linear directions along the X-axis, Y-axis, and Z-axis, and degrees of freedom in directions around the X-axis, Y-axis, and Z-axis. The multi-joint robot arm may also have any other mechanism, such as a Cartesian coordinate system robot arm, a polar coordinate system robot arm, a cylindrical coordinate system robot arm, or a SCARA type robot arm. The base end of the arm unit 110 is placed on the mounting table 50 described above in a fixed state. The arm unit 110 can move the holding unit 130 within the distance range that the arm unit 110 can reach by moving the respective link members 112, 113, and 116.
 保持部130は一例としてサクショングリッパであり、図3に示すように物品を保持するための複数の吸着カップ132を備えている。保持部130は真空ポンプ等の不図示の吸引手段を備え、各吸着カップ132の内側に開口した吸引口から空気を吸引することにより、物品の表面を複数の吸着カップ132で保持することができる。 The holding unit 130 is, for example, a suction gripper, and is provided with a number of suction cups 132 for holding an object, as shown in FIG. 3. The holding unit 130 is provided with suction means (not shown), such as a vacuum pump, and is able to hold the surface of an object with the multiple suction cups 132 by sucking in air from suction ports opening on the inside of each suction cup 132.
 カメラ140は、一例として、保持部130のうち吸着カップ132が設けられている面に隣接する側面上に設置されている。カメラ140は、アーム部110の先端付近から、その下方の領域における物品移動装置100の周囲の環境情報を取得することに用いられる。カメラ140は、例えば、画素が二次元的に並んだ撮像画像(一例でRGB画像)を生成する撮像素子と、距離データを生成する距離検出デバイスである深度センサとを有するものであってもよい。深度センサは、対象物までの距離データを取得することができるものであれば特定の方式に限定されるものではない。例えば、ステレオレンズ方式や、LiDAR(Light Detection and Ranging)方式を利用可能である。深度センサは例えばDepth画像を生成するものであってもよい。カメラ140は例えば超音波素子を利用して距離データを取得するものであってもよい。 As an example, the camera 140 is installed on the side surface of the holding unit 130 adjacent to the surface on which the suction cup 132 is provided. The camera 140 is used to acquire environmental information about the surroundings of the item moving device 100 in the area below the tip of the arm unit 110. The camera 140 may have, for example, an image sensor that generates an image (an RGB image in one example) in which pixels are arranged two-dimensionally, and a depth sensor that is a distance detection device that generates distance data. The depth sensor is not limited to a specific type as long as it can acquire distance data to an object. For example, a stereo lens type or a LiDAR (Light Detection and Ranging) type can be used. The depth sensor may generate a depth image, for example. The camera 140 may acquire distance data using an ultrasonic element, for example.
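 By way of a non-limiting illustration of how the color-plus-depth data described above might be handled downstream, the following Python sketch back-projects one depth pixel into a 3D point in the camera coordinate frame using an assumed pinhole model; the intrinsic parameters (fx, fy, cx, cy) and the image size in the example are hypothetical values introduced only for this sketch.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RGBDFrame:
    """One capture from the hand-mounted camera: color image + depth map (meters)."""
    rgb: np.ndarray    # shape (H, W, 3), uint8
    depth: np.ndarray  # shape (H, W), float32, meters

def pixel_to_camera_point(frame: RGBDFrame, u: int, v: int,
                          fx: float, fy: float, cx: float, cy: float):
    """Back-project pixel (u, v) with its measured depth into a 3D point (X, Y, Z)
    in the camera coordinate frame, using a pinhole camera model."""
    z = float(frame.depth[v, u])
    if z <= 0.0:
        return None  # no valid depth measurement at this pixel
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with hypothetical intrinsics for a 640x480 depth sensor.
if __name__ == "__main__":
    frame = RGBDFrame(rgb=np.zeros((480, 640, 3), dtype=np.uint8),
                      depth=np.full((480, 640), 1.2, dtype=np.float32))
    p = pixel_to_camera_point(frame, u=320, v=240, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(p)  # roughly [0.0, 0.0, 1.2] for the image center
```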
<制御装置150の構成>
 図4に示すように、制御装置150は、プロセッサ155と、記憶部160と、操作部172と、表示部174と、入出力部176とを有している。図4では、制御装置150は単一の要素として描かれているが、制御装置150は必ずしも物理的に1つの要素である必要はなく、物理的に分離した複数の要素で構成されていてもよい。
<Configuration of control device 150>
As shown in Fig. 4, the control device 150 has a processor 155, a storage unit 160, an operation unit 172, a display unit 174, and an input/output unit 176. In Fig. 4, the control device 150 is depicted as a single element, but the control device 150 does not necessarily have to be a single physical element, and may be composed of multiple physically separated elements.
 操作部172は、ユーザからの入力を受け付けるための装置である。操作部172は、キーボード、マウス、タッチパネルの他、赤外線等を用いて位置と姿勢をトラッキングすることが可能でトリガーボタンなどを備えたトラッカーやVRコントローラ等と呼ばれるリモートコントローラ等の、コンピュータに対して入力を行うためのデバイスで構成され得る。操作部172は、マイク等の音声入力デバイスを有していてもよい。また、操作部172は、ユーザの動きを画像認識して識別するジェスチャ入力デバイスを有していてもよい。 The operation unit 172 is a device for receiving input from a user. The operation unit 172 may be configured with devices for inputting to a computer, such as a keyboard, a mouse, a touch panel, or a remote controller called a tracker or VR controller that is capable of tracking position and posture using infrared rays or the like and has a trigger button or the like. The operation unit 172 may also have a voice input device such as a microphone. The operation unit 172 may also have a gesture input device that uses image recognition to identify the user's movements.
 表示部174は、プロセッサ155により生成される表示画面を表示するディスプレイ装置であり、例えば、液晶ディスプレイや有機ELディスプレイ装置等の平面型の表示装置の他、ヘッドマウントディスプレイ(HMD)であってもよい。入出力部176は、有線通信又は無線通信によって物品移動装置100のアーム部110,保持部130、カメラ140に接続され、それらの構成要素との間で、制御信号の出力や取得情報の入力を行う。 The display unit 174 is a display device that displays the display screen generated by the processor 155, and may be, for example, a flat display device such as a liquid crystal display or an organic EL display device, or a head-mounted display (HMD). The input/output unit 176 is connected to the arm unit 110, the holding unit 130, and the camera 140 of the item moving device 100 by wired or wireless communication, and outputs control signals and inputs acquired information between these components.
 記憶部160は、ROM(Read Only Memory)、RAM(Random Access Memory)、HDD(Hard Disk Drive)あるいはSSD(Solid State Drive)等の一時的又は非一時的な記憶媒体を含む。記憶部160は、プロセッサ155が実行するコンピュータプログラムを記憶する。記憶部160に記憶されるコンピュータプログラムは、図7等を参照して後述する、プロセッサ155による物品移動装置100の制御方法を実施する命令を含む。記憶部160はさらに、カメラ140から受信した情報や、プロセッサ155による処理動作によって生成される各種データ(中間生成データを含む)を少なくとも一時的に記憶する。 The storage unit 160 includes a temporary or non-temporary storage medium such as a ROM (Read Only Memory), a RAM (Random Access Memory), a HDD (Hard Disk Drive), or an SSD (Solid State Drive). The storage unit 160 stores a computer program executed by the processor 155. The computer program stored in the storage unit 160 includes instructions for implementing a method of controlling the item moving device 100 by the processor 155, which will be described later with reference to FIG. 7 etc. The storage unit 160 further at least temporarily stores information received from the camera 140 and various data (including intermediately generated data) generated by the processing operations of the processor 155.
 プロセッサ155は、例えば、1又は2以上のCPU(Central Processing Unit)で構成される。プロセッサ155は、記憶部160に記憶されたコンピュータプログラムを実行することにより、主として、操作部172を介してユーザによって行われる入力に基づく処理、入出力部176の入出力制御、表示部174の表示制御等を司る。とりわけ、プロセッサ155は、操作部172によってユーザから入力されたユーザ入力に基づいて、アーム部110及び保持部130の各駆動部(不図示)やカメラ140を動作させるための1つのあるいは複数の制御信号を生成する。 The processor 155 is composed of, for example, one or more CPUs (Central Processing Units). By executing a computer program stored in the memory unit 160, the processor 155 mainly controls processing based on input made by the user via the operation unit 172, input/output control of the input/output unit 176, display control of the display unit 174, etc. In particular, the processor 155 generates one or more control signals for operating the drive units (not shown) of the arm unit 110 and the holding unit 130 and the camera 140 based on user input input by the user via the operation unit 172.
 さらに、プロセッサ155は、ユーザに提示するUI(ユーザ・インターフェース)画面を生成し、表示部174に表示するように構成されている。UI画面(不図示)は、例えば、複数の選択肢をユーザに提供する選択ボタン表示を含む。さらにプロセッサ155は、物品移動装置100のカメラ140によって取得された物品移動装置100の周囲環境の現実世界の画像または動画に基づいて仮想世界(シミュレーション空間)の画像または動画を生成し、表示部174に表示する。プロセッサ155は、現実世界の画像または動画に基づいて仮想世界の画像または動画を生成する際に、例えば現実世界の座標系と仮想世界の座標系とを対応付けることにより、現実世界と仮想世界との相関関係を構築する。さらに、現実世界の画像または動画と仮想世界(シミュレーション空間)の画像または動画とを同時に表示部174に表示してもよい。 Furthermore, the processor 155 is configured to generate a UI (user interface) screen to be presented to the user and display it on the display unit 174. The UI screen (not shown) includes, for example, a selection button display that provides the user with multiple options. Furthermore, the processor 155 generates an image or video of a virtual world (simulation space) based on a real-world image or video of the surrounding environment of the item moving device 100 acquired by the camera 140 of the item moving device 100, and displays it on the display unit 174. When generating an image or video of the virtual world based on an image or video of the real world, the processor 155 establishes a correlation between the real world and the virtual world, for example, by associating a coordinate system of the real world with a coordinate system of the virtual world. Furthermore, an image or video of the real world and an image or video of the virtual world (simulation space) may be displayed on the display unit 174 simultaneously.
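 As a non-limiting illustration of the correspondence between the real-world and virtual-world coordinate systems mentioned above, the sketch below registers the two frames with a single rigid (homogeneous) transform; the rotation angle and offsets used in the example are hypothetical stand-ins for whatever registration the device would actually establish.

```python
import numpy as np

def make_transform(yaw_rad: float, tx: float, ty: float, tz: float) -> np.ndarray:
    """Homogeneous transform (rotation about Z plus translation) mapping
    virtual-world (simulation) coordinates to real-world coordinates."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s, 0.0, tx],
                     [s,  c, 0.0, ty],
                     [0.0, 0.0, 1.0, tz],
                     [0.0, 0.0, 0.0, 1.0]])

def virtual_to_real(T_virtual_to_real: np.ndarray, p_virtual) -> np.ndarray:
    """Map a 3D point expressed in the virtual frame into the real-world frame."""
    p = np.append(np.asarray(p_virtual, dtype=float), 1.0)
    return (T_virtual_to_real @ p)[:3]

# Hypothetical registration: virtual frame rotated 90 degrees and offset 2 m in X.
T = make_transform(yaw_rad=np.pi / 2, tx=2.0, ty=0.0, tz=0.0)
print(virtual_to_real(T, [1.0, 0.0, 0.5]))  # -> [2.0, 1.0, 0.5]
```

 Once such a transform is fixed, a position specified by the user in the virtual world can be mapped to a target position for the device in the real world, and vice versa.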
 物品移動装置100の周囲環境の現実世界の画像または動画に基づいて生成される仮想世界(シミュレーション空間)の画像または動画には、物品移動装置100の周囲環境に存在するオブジェクト(物品10、パレット20、カート30、ベルトコンベア40等)も含まれる。プロセッサ155が現実世界の画像または動画に基づいて仮想世界の画像または動画を生成する際に現実世界と仮想世界との相関関係を構築することで、以下に詳しく説明するように、仮想世界におけるユーザの操作と、現実世界における物品移動装置100の動作とを関連付けることが可能となる。 The images or videos of the virtual world (simulation space) generated based on real-world images or videos of the surrounding environment of the item moving device 100 also include objects (items 10, pallets 20, carts 30, conveyor belt 40, etc.) that exist in the surrounding environment of the item moving device 100. By constructing a correlation between the real world and the virtual world when the processor 155 generates images or videos of the virtual world based on real-world images or videos, it becomes possible to associate the user's operations in the virtual world with the operation of the item moving device 100 in the real world, as described in detail below.
 さらには、制御装置150のプロセッサ155は、仮想世界(シミュレーション空間)の画像または動画に含まれるオブジェクトに対応するモデルを生成するとともに、生成したモデルに対して当該オブジェクトの属性情報を付与する処理を行うように構成されている。 Furthermore, the processor 155 of the control device 150 is configured to generate a model corresponding to an object contained in an image or video of the virtual world (simulation space), and to perform processing to add attribute information of the object to the generated model.
 図5及び図6は、制御装置150のプロセッサ155による、オブジェクトに対応するモデルを介してオブジェクトに対してアノテーション情報を付与する処理動作を説明する図である。 Figures 5 and 6 are diagrams explaining the processing operation of the processor 155 of the control device 150 to assign annotation information to an object via a model corresponding to the object.
 制御装置150のプロセッサ155は、ユーザが操作部172で行う所定の操作(コントローラの所定のボタンの押下等)を受け付けると、モデルを生成する対象のオブジェクトの種類をユーザが選択するためのUI画面を表示部174に表示する。図5は、プロセッサ155によって表示部174に表示されるオブジェクト種類選択のためのUI画面の一例を示している。図5に示すUI画面では、選択可能なオブジェクトの種類として、「Box(梱包済み商品)」、「Conveyor(ベルトコンベア)」、「Pallet(パレット)」、「Cart(カート)」が提示され、ユーザが操作部172で「Conveyor(ベルトコンベア)」をポインティングしてそれを選択している状態が示されている。UI画面でのオブジェクト種類の選択操作は、例えば、操作部172と連動して移動する指示線の先を選択したいオブジェクト種類の選択肢の位置にポインティングして、操作部172の所定のボタンを押下することにより実行することができる。プロセッサ155は、オブジェクト種類の選択操作をこのようにして受け付けることにより、これに続いて以下に説明するように生成するモデルに関連付ける属性情報として、選択したオブジェクト種類の属性(上記の例では、属性としてConveyor(ベルトコンベア))を取得する。 When the processor 155 of the control device 150 receives a predetermined operation (such as pressing a predetermined button on the controller) performed by the user on the operation unit 172, it displays a UI screen on the display unit 174 for the user to select the type of object for which a model is to be generated. FIG. 5 shows an example of a UI screen for selecting an object type displayed on the display unit 174 by the processor 155. The UI screen shown in FIG. 5 shows a state in which "Box (packaged product)", "Conveyor (conveyor belt)", "Pallet (pallet)", and "Cart (cart)" are presented as selectable object types, and the user is pointing to "Conveyor (conveyor belt)" on the operation unit 172 to select it. The object type selection operation on the UI screen can be performed, for example, by pointing the tip of an instruction line that moves in conjunction with the operation unit 172 to the position of the option for the object type that is to be selected, and pressing a predetermined button on the operation unit 172. By accepting the object type selection operation in this manner, the processor 155 subsequently acquires the attributes of the selected object type (in the above example, the attribute is "Conveyor (conveyor belt)") as attribute information to be associated with the model to be generated, as described below.
 操作部172によるオブジェクト種類の選択を受け付けると、続いてプロセッサ155は、物品移動装置100の周囲環境の仮想世界(シミュレーション空間)を表示部174に表示する。このとき、表示部174には、選択したオブジェクト(上記の例では、Conveyor(ベルトコンベア))の輪郭形状・姿勢及び寸法を指定するためのモデルを同じ仮想世界(シミュレーション空間)内に表示する。モデルは、一例として、直方体形状を有する。 After receiving the selection of the object type from the operation unit 172, the processor 155 then displays a virtual world (simulation space) of the surrounding environment of the item moving device 100 on the display unit 174. At this time, the display unit 174 displays a model for specifying the contour shape, posture, and dimensions of the selected object (in the above example, a conveyor (belt conveyor)) in the same virtual world (simulation space). As an example, the model has a rectangular parallelepiped shape.
 続いてプロセッサ155は、ユーザによる操作部172からの操作入力に応じて、表示部174に表示されるシミュレーション空間内において、モデルの位置、寸法及び姿勢を変化させる。操作部172による操作入力は、例えば、モデルMdlをポインティングしてコントローラの所定のボタンを押下することでモデルMdlをドラッグし、モデルMdlを所望の位置及び姿勢に移動させることと、いわゆるバウンディングボックスの寸法を変更する要領で、モデルMdlのいずれかの辺か頂点をポインティングしてコントローラの所定のボタンを押下した状態で移動させることと、によって行うことが可能である。 Then, the processor 155 changes the position, dimensions, and orientation of the model in the simulation space displayed on the display unit 174 in response to operation input from the operation unit 172 by the user. Operation input from the operation unit 172 can be performed, for example, by pointing at the model Mdl and pressing a specified button on the controller to drag the model Mdl and move the model Mdl to the desired position and orientation, or by pointing at any edge or vertex of the model Mdl and moving it while pressing a specified button on the controller, in a manner similar to changing the dimensions of a so-called bounding box.
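 A minimal sketch of one way the cuboid model Mdl could be represented internally is given below. The class name, fields, and the move/resize operations are assumptions made only for illustration and are not asserted to be the disclosed implementation; the model simply holds a center position, a yaw orientation, and length/width/height dimensions of the kind the user adjusts through the operation unit.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class BoxModel:
    """Cuboid model used to annotate one object in the simulation space."""
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))   # center [x, y, z] (m)
    yaw: float = 0.0                                                     # rotation about Z (rad)
    dimensions: np.ndarray = field(default_factory=lambda: np.ones(3))  # [length, width, height] (m)

    def move(self, delta):
        """Drag the model to a new position (translation in the simulation frame)."""
        self.position = self.position + np.asarray(delta, dtype=float)

    def resize(self, scale):
        """Scale the edge lengths, as when a bounding-box handle is dragged."""
        self.dimensions = self.dimensions * np.asarray(scale, dtype=float)

    def corners(self) -> np.ndarray:
        """Return the 8 corner points of the cuboid in the simulation frame."""
        l, w, h = self.dimensions / 2.0
        local = np.array([[sx * l, sy * w, sz * h]
                          for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return local @ rot.T + self.position

# Roughly aligning a model to a conveyor-sized footprint (hypothetical numbers).
mdl = BoxModel()
mdl.move([1.5, 0.0, 0.4])
mdl.resize([2.0, 0.6, 0.8])
print(mdl.corners().round(2))
```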
 図6は、ユーザによる操作部172からの操作入力に応じて、表示部174に表示されるシミュレーション空間内においてモデルの位置、寸法及び姿勢を変化させる様子を示す図である。 FIG. 6 shows how the position, dimensions, and orientation of the model are changed in the simulation space displayed on the display unit 174 in response to operation input from the operation unit 172 by the user.
 図6(a)は、操作部172からの操作入力に応じてモデルの位置、寸法及び姿勢を変化させている様子の一場面を示す図である。図6(a)には、シミュレーション空間内に表示されたモデルMdlが、同じシミュレーション空間内に表示されたコンベヤのスキャン画像の位置におおまかに位置合わせされた状態が示されている。その状態から、ユーザによる操作部172からの操作入力に応じて、モデルMdlの位置、姿勢及び各辺の寸法がさらに整えられると、図6(b)に示すように、モデルMdlの輪郭がスキャン画像のコンベヤの外形輪郭と概ね一致した状態になる。これにより、モデルMdlの生成が終了する。 FIG. 6(a) is a diagram showing a scene in which the position, dimensions, and orientation of the model are changed in response to operation input from the operation unit 172. FIG. 6(a) shows a state in which the model Mdl displayed in the simulation space is roughly aligned with the position of a scanned image of the conveyor displayed in the same simulation space. From this state, when the position, orientation, and dimensions of each side of the model Mdl are further adjusted in response to operation input from the operation unit 172 by the user, the contour of the model Mdl roughly matches the external contour of the conveyor in the scanned image, as shown in FIG. 6(b). This completes the generation of the model Mdl.
 プロセッサ155は、ユーザにより操作部172においてモデル生成の入力操作(例えば、コントローラの所定ボタンの押下)を受け付けると、その時点で指定されているモデルMdlの位置、姿勢及び各辺の寸法を、対応するオブジェクト(この例では、ベルトコンベア)の位置、姿勢及び各辺の寸法として取得する。このように生成されたモデルMdlのデータは、制御装置150の記憶部160に格納される。 When the processor 155 receives a model generation input operation (e.g., pressing a specific button on the controller) from the user on the operation unit 172, it obtains the position, orientation, and dimensions of each side of the model Mdl specified at that time as the position, orientation, and dimensions of each side of the corresponding object (in this example, a conveyor belt). The data of the model Mdl generated in this way is stored in the memory unit 160 of the control device 150.
 上述したような一連の処理により、プロセッサ155は、仮想世界(シミュレーション空間)の座標系におけるモデルMdlを介して、対応するオブジェクトに対して属性情報と、位置・姿勢及び寸法(縦・横・高さ寸法)に関する情報とを含むアノテーション情報を付与する。プロセッサ155は、このように付与されたアノテーション情報に基づいて、現実世界の座標系における対応するオブジェクト(ベルトコンベア)の位置、姿勢及び寸法を算出して認識することができる。 Through the series of processes described above, the processor 155 assigns annotation information including attribute information and information regarding position, orientation, and dimensions (length, width, and height dimensions) to the corresponding object via the model Mdl in the coordinate system of the virtual world (simulation space). Based on the annotation information assigned in this way, the processor 155 can calculate and recognize the position, orientation, and dimensions of the corresponding object (conveyor belt) in the coordinate system of the real world.
 また、プロセッサ155は、カメラ140によって撮影された画像に基づき、任意の画像認識技術及び/又は学習済みモデルを用いて、画像内に存在するオブジェクトを検出するように構成されている。一例として、プロセッサ155は、パレット20上に積載された複数の物品10をそれらの上方からカメラ140によって撮像された画像に基づき、上方に露出している各々の物品10の上面の輪郭形状を検出することができる。学習済みモデルは、上記のようにパレット20上に積載された複数の物品10をそれらの上方からカメラ140によって撮像した様々な画像の画像データを用いて、例えば、各層にニューロンを含む複数の層で構成されるニューラルネットワークで機械学習を実行して生成することができる。そのようなニューラルネットワークとして、例えば20層以上を備えた畳み込みニューラルネットワーク(CNN:Convolutional Neural Network)のようなディープニューラルネットワークを用いてもよい。このようなディープニューラルネットワークを用いた機械学習は、ディープラーニングと称される。あるいは、上記のような学習済みモデルは、主に自己注意メカニズムに基づくディープニューラルネットワークの一種であるTransformerをコンピュータビジョンの分野に適用した「Visual Transformer」を用いて生成することもできる。このようにして生成された学習済みモデルは、記憶部160に記憶される。 The processor 155 is also configured to detect objects present in an image captured by the camera 140 using any image recognition technology and/or a trained model. As an example, the processor 155 can detect the contour shape of the upper surface of each of the items 10 exposed above based on an image captured by the camera 140 from above the multiple items 10 loaded on the pallet 20. The trained model can be generated by performing machine learning with a neural network composed of multiple layers including neurons in each layer using image data of various images captured by the camera 140 from above the multiple items 10 loaded on the pallet 20 as described above. As such a neural network, a deep neural network such as a convolutional neural network (CNN) having 20 or more layers may be used. Machine learning using such a deep neural network is called deep learning. Alternatively, the trained model described above can be generated using a "Visual Transformer," which applies the Transformer, a type of deep neural network based mainly on a self-attention mechanism, to the field of computer vision. The trained model generated in this way is stored in the storage unit 160.
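 The embodiment relies on image recognition and/or a trained model for detecting the top-face contours of the items 10. As a much simpler stand-in (not the learned model described above), the following sketch extracts candidate top-face contours from a depth image by thresholding height above the floor with OpenCV; the camera height, the minimum item height, and the area threshold are hypothetical values chosen only for this illustration.

```python
import cv2
import numpy as np

def top_face_contours(depth_m: np.ndarray, camera_height_m: float,
                      min_item_height_m: float = 0.05):
    """Rough stand-in for top-face detection: pixels whose measured depth is
    sufficiently shorter than the camera-to-floor distance are treated as item
    tops, and their outlines are extracted as contours."""
    height_above_floor = camera_height_m - depth_m                    # per-pixel height map
    mask = np.where(height_above_floor > min_item_height_m, 255, 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only reasonably large regions to suppress sensor noise.
    return [c for c in contours if cv2.contourArea(c) > 500]

# Synthetic example: a floor 2.0 m below the camera with one 0.3 m tall box.
depth = np.full((480, 640), 2.0, dtype=np.float32)
depth[200:300, 250:400] = 1.7
print(len(top_face_contours(depth, camera_height_m=2.0)))  # -> 1
```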
<物品移動装置100による物品移動動作>
 次に、主に図7を参照して、本実施形態の物品移動装置100による物品移動動作に関する一連の処理及び動作について説明する。図7は、物品移動装置100による物品移動動作を示すフローチャートである。
<Item Moving Operation by Item Moving Device 100>
Next, a series of processes and operations related to the article moving operation by the article moving device 100 of this embodiment will be described mainly with reference to Fig. 7. Fig. 7 is a flowchart showing the article moving operation by the article moving device 100.
 最初にステップS11において、物品移動装置100のプロセッサ155は、保持部130に設置されたカメラ140によって物品移動装置100の周囲環境情報を取得する。 First, in step S11, the processor 155 of the item moving device 100 acquires information about the surrounding environment of the item moving device 100 using the camera 140 installed in the holding unit 130.
 物品移動装置100は、AGV60によって移動されて倉庫内での配置位置が適宜変更される。移動された物品移動装置100の周囲に、物品10を積載したパレット20と、それらの物品10の移動先であるカート30、ベルトコンベア40等が配置される。プロセッサ155は、ステップS11における物品移動装置100の周囲環境情報を取得する動作として、第1の動作と第2の動作とを物品移動装置100に実行させる。図8は、図7に示したステップS11における第1の動作及び第2の動作を説明する図である。 The item moving device 100 is moved by the AGV 60, and its location within the warehouse is changed as appropriate. Around the moved item moving device 100, pallets 20 loaded with items 10, carts 30 to which the items 10 are to be moved, belt conveyor 40, etc. are arranged. The processor 155 causes the item moving device 100 to execute a first operation and a second operation as an operation for acquiring ambient environment information of the item moving device 100 in step S11. Figure 8 is a diagram for explaining the first operation and the second operation in step S11 shown in Figure 7.
 第1の動作において、プロセッサ155は、図8(a)に示すように物品移動装置100のアーム部110の各リンク部材112,113,116をベース部120からせり出すように直線状に伸ばし、かつ保持部130に設置されたカメラ140の撮像方向が下方前側を向くように、各関節部のアクチュエータ(不図示)を動作させる。その後、プロセッサ155は、カメラ140による撮像を行いながら、アーム部110全体をベース部120に対して所定の回転角度(最大で1周分)だけ回転させる。このような第1の動作によれば、より高い位置に配置されたカメラ140により、物品移動装置100から比較的遠い領域を含む第1の範囲における周囲環境情報の取得が行われる。 In the first operation, the processor 155 extends each of the link members 112, 113, and 116 of the arm unit 110 of the item moving device 100 in a straight line so that they jut out from the base unit 120 as shown in FIG. 8(a), and operates the actuators (not shown) of each joint so that the imaging direction of the camera 140 installed in the holding unit 130 faces downward and forward. The processor 155 then rotates the entire arm unit 110 a predetermined rotation angle (maximum one revolution) relative to the base unit 120 while capturing an image with the camera 140. According to this first operation, the camera 140, which is positioned at a higher position, obtains surrounding environment information in a first range that includes an area relatively far from the item moving device 100.
 それに続く第2の動作において、プロセッサ155は、図8(b)に示すようにアーム部110の各リンク部材112,113,116が屈曲し、かつ保持部130に設置されたカメラ140の撮像方向が下方前側を向くように、各関節部のアクチュエータ(不図示)を動作させる。その後、プロセッサ155は、カメラ140による撮像を行いながら、アーム部110全体をベース部120に対して所定の回転角度(最大で1周分)だけ回転させる。このような第2の動作によれば、より低い位置に配置されたカメラ140により、物品移動装置100から比較的近い領域を含む第2の範囲における周囲環境情報の取得が行われる。 In the subsequent second operation, the processor 155 operates the actuators (not shown) of each joint so that the link members 112, 113, and 116 of the arm unit 110 are bent as shown in FIG. 8(b) and the imaging direction of the camera 140 installed in the holding unit 130 faces downward and forward. The processor 155 then rotates the entire arm unit 110 a predetermined rotation angle (maximum one revolution) relative to the base unit 120 while capturing an image with the camera 140. According to this second operation, the camera 140, which is positioned at a lower position, obtains surrounding environment information in a second range that includes an area relatively close to the item moving device 100.
 このように、上記の第1の動作及び第2の動作を含むステップS11によれば、物品移動装置100の周囲の比較的広い第1の範囲における周囲環境情報を取得できるとともに、物品移動装置100に比較的近い第2の範囲に存在するオブジェクトについてはより高い解像度で周囲環境情報を取得することができる。 In this way, according to step S11, which includes the above-mentioned first and second operations, ambient environment information can be obtained in a relatively wide first range around the item moving device 100, and ambient environment information can be obtained with higher resolution for objects present in a second range that is relatively close to the item moving device 100.
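 A hedged sketch of how the first and second acquisition operations of step S11 might be sequenced is shown below; `robot` and `camera` are hypothetical interfaces (set_named_pose, rotate_base_to, capture) introduced only for this illustration and do not correspond to any named API of the device.

```python
import numpy as np

def scan_surroundings(robot, camera, sweep_deg: float = 360.0, step_deg: float = 30.0):
    """Illustrative two-pass scan corresponding to the first and second operations:
    an extended high pose for the far range, then a folded low pose for the near range."""
    frames = []
    for pose_name in ("extended_high", "folded_low"):    # first pass, then second pass
        robot.set_named_pose(pose_name)                  # camera tilted down and forward
        for angle in np.arange(0.0, sweep_deg + 1e-6, step_deg):
            robot.rotate_base_to(np.deg2rad(angle))      # rotate the whole arm about the base
            frames.append(camera.capture())              # RGB-D frame at this viewpoint
    return frames
```

 In this sketch both passes sweep the same angular range; the extended pose favors coverage of the far range, while the folded pose yields higher-resolution data on objects close to the device.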
 上記では物品移動装置100の周囲環境情報を取得する動作が第1及び第2の動作を含む例を示したが、周囲環境情報を取得する動作として第1及び第2の動作のいずれか一方の動作のみ(すなわち、1回の取得動作のみ)を実行することとしてもよい。また、周囲環境情報を取得する動作はプロセッサ155による制御に基づいて自動的に実行されてもよいし、あるいは、操作部172を介したユーザ入力操作によって手動でアーム部110及び保持部130を操作して実行されてもよい。後者の場合は、周囲環境情報を取得したい対象物の周囲にカメラ140を集中的に向けて情報取得を行うことにより、周囲全体の情報を自動的に取得する場合に比べて少ないデータ量でも、物品移動装置100の動作に必要な情報を取得することが可能である。 In the above, an example was given in which the operation of acquiring the surrounding environment information of the item moving device 100 includes the first and second operations, but it is also possible to execute only one of the first and second operations (i.e., only one acquisition operation) as the operation of acquiring the surrounding environment information. In addition, the operation of acquiring the surrounding environment information may be executed automatically based on the control of the processor 155, or may be executed by manually operating the arm unit 110 and the holding unit 130 by a user input operation via the operation unit 172. In the latter case, by concentrating the camera 140 around the object about which the surrounding environment information is to be acquired and acquiring information, it is possible to acquire information necessary for the operation of the item moving device 100 even with a smaller amount of data than when information about the entire surroundings is automatically acquired.
 次にステップS12において、プロセッサ155は、上記ステップS11で取得した周囲環境情報に基づいて、物品移動装置100の周囲環境を再現した仮想世界(シミュレーション空間)を生成し、表示部174に表示する。仮想世界には、現実世界における物品移動装置100の周囲の風景(ロジスティクス倉庫の床面等)に加え、少なくとも物品移動装置100がアクセス可能な範囲に存在する現実世界の各オブジェクトが表示される。オブジェクトは、カメラ140によって得られた現実世界のオブジェクトの二次元あるいは三次元画像、深度マップあるいはポイントクラウド等による表現であってもよい。あるいは、オブジェクトを表すコンピュータ・グラフィックスによって表現されてもよい。 Next, in step S12, the processor 155 generates a virtual world (simulation space) that reproduces the surrounding environment of the item moving device 100 based on the surrounding environment information acquired in step S11, and displays it on the display unit 174. In addition to the scenery around the item moving device 100 in the real world (such as the floor of a logistics warehouse), the virtual world displays each object in the real world that exists at least within a range accessible to the item moving device 100. The objects may be represented by two-dimensional or three-dimensional images of the real-world objects obtained by the camera 140, depth maps, point clouds, or the like. Alternatively, they may be represented by computer graphics that represent the objects.
 次にステップS13において、プロセッサ155は、仮想世界(シミュレーション空間)に表示されたオブジェクトに対してアノテーション情報を付与する。プロセッサ155は、図5及び図6を参照して上記に説明したようにユーザが操作部172を操作して入力する、アノテーション情報を付与する対象のオブジェクトの種類の選択情報(図5参照)と、仮想世界において当該オブジェクトに対して決定されるモデルの位置・姿勢・寸法に関する情報とに基づいて、当該オブジェクトに対してアノテーション情報を付与する。 Next, in step S13, the processor 155 assigns annotation information to the object displayed in the virtual world (simulation space). The processor 155 assigns annotation information to the object based on selection information (see FIG. 5) regarding the type of object to which annotation information is to be assigned, which is input by the user operating the operation unit 172 as described above with reference to FIGS. 5 and 6, and information regarding the position, orientation, and dimensions of the model determined for the object in the virtual world.
 ステップS13によるオブジェクトに対するアノテーション情報の付与は、仮想世界に表示されている種々のオブジェクト(物品10、パレット20、カート30、ベルトコンベア40等)についてそれぞれ行われる。特に、パレット20の上に複数の物品10が積載されている場合には、各々の物品10について個別にアノテーション情報付与を行ってもよく、あるいは、パレット20の上に積載される複数の物品10が全て同じ大きさ(縦・横・高さの各部寸法)であるケースでは、パレット20上の少なくとも1つの物品10についてアノテーション情報(位置・姿勢・寸法に関する情報)を付与することとしてもよい。後者の場合には、任意の画像認識技術及び/又は学習済みモデルを用いて検出される他の物品10には、少なくとも1つの物品10について付与されたアノテーション情報と同じアノテーション情報がプロセッサ155によって付与される。 The annotation information is added to the objects in step S13 for each of the various objects (items 10, pallets 20, carts 30, conveyor belts 40, etc.) displayed in the virtual world. In particular, when multiple items 10 are loaded on the pallet 20, annotation information may be added to each item 10 individually, or when multiple items 10 loaded on the pallet 20 are all the same size (length, width, and height), annotation information (information regarding position, orientation, and dimensions) may be added to at least one item 10 on the pallet 20. In the latter case, the processor 155 adds the same annotation information as the annotation information added to at least one item 10 to other items 10 detected using any image recognition technology and/or trained model.
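 For the case just described in which all items on the pallet share the same dimensions, the following sketch illustrates, under assumed data structures, how the annotation given to one reference item could be copied onto the other detected items while keeping each item's own detected pose; the class and field names are hypothetical and are used only for this example.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Annotation:
    attribute: str                       # e.g. "Box", "Pallet", "Cart", "Conveyor"
    position: Tuple[float, float, float] # (x, y, z) in the virtual frame
    yaw: float                           # orientation about Z (rad)
    dimensions: Tuple[float, float, float]  # (length, width, height)

def propagate_annotation(reference: Annotation,
                         detected_poses: List[tuple]) -> List[Annotation]:
    """Give every other detected item the same attribute and dimensions as the
    single hand-annotated item, keeping each item's own detected pose."""
    return [Annotation(reference.attribute, pos, yaw, reference.dimensions)
            for (pos, yaw) in detected_poses]

ref = Annotation("Box", (0.4, 0.2, 0.35), 0.0, (0.40, 0.30, 0.25))
others = propagate_annotation(ref, [((0.8, 0.2, 0.35), 0.0), ((0.4, 0.5, 0.35), 0.0)])
print(len(others), others[0].dimensions)  # -> 2 (0.4, 0.3, 0.25)
```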
 このステップS13により、プロセッサ155は、仮想世界の座標系におけるモデルMdlを介して、種々のオブジェクトに対して、属性情報と、位置・姿勢及び寸法に関する情報とを含むアノテーション情報を付与する。このようにしてオブジェクトにアノテーション情報を付与することにより、表示部174に表示される仮想空間内のオブジェクトが、ある体積空間を占める単なる物体ではなく、付与されたアノテーション情報によって特定されるオブジェクトを表すものであることがプロセッサ155によって認識され、プロセッサ155は、このように付与されたアノテーション情報に基づいて、現実世界の座標系における対応する種々のオブジェクトの位置、姿勢及び寸法を算出して認識することが可能となる。換言すれば、プロセッサ155は、現実世界の物品移動装置100の周囲に、どの種類のオブジェクトが、どのような位置、姿勢及び寸法で存在しているかを、アノテーション情報に基づいて認識する。各オブジェクトに付与されたアノテーション情報は、少なくとも一時的に記憶部160に記憶される。 In step S13, the processor 155 assigns annotation information including attribute information and information regarding position, orientation, and size to various objects via the model Mdl in the coordinate system of the virtual world. By assigning annotation information to the objects in this manner, the processor 155 recognizes that the objects in the virtual space displayed on the display unit 174 are not simply objects occupying a certain volumetric space, but represent objects specified by the assigned annotation information, and the processor 155 is able to calculate and recognize the positions, orientations, and sizes of various corresponding objects in the coordinate system of the real world based on the annotation information assigned in this manner. In other words, the processor 155 recognizes, based on the annotation information, what types of objects exist around the item moving device 100 in the real world, and in what positions, orientations, and sizes. The annotation information assigned to each object is stored at least temporarily in the storage unit 160.
 次にステップS14において、プロセッサ155は、アノテーション情報に基づいて認識する各オブジェクトについて、物品移動装置100のアーム部110によって保持部130がアクセス可能な範囲に存在しているかどうかを判定する。物品移動装置100のアーム部110によって保持部130がアクセス可能な範囲に関する情報は、既知情報として記憶部160に予め記憶されている。アクセス可能範囲に関する情報は、物品移動装置100を中心とした平面情報(例えば、円形形状や扇形形状等)であってもよいし、そのような平面情報に高さ情報を加えた立体情報(例えば、円柱形状や扇形の立方体形状、半球形状等)であってもよい。そのようなアクセス可能範囲に関する情報は、物品移動装置100のアーム部110及び保持部130の寸法・可動範囲等に基づいて定められる。 Next, in step S14, the processor 155 determines whether each object recognized based on the annotation information is present within a range accessible to the holding unit 130 by the arm unit 110 of the item moving device 100. Information regarding the range accessible to the holding unit 130 by the arm unit 110 of the item moving device 100 is pre-stored in the storage unit 160 as known information. Information regarding the accessible range may be planar information (e.g., circular or sectorial) centered on the item moving device 100, or may be three-dimensional information (e.g., cylindrical, sectorial, or hemispherical) in which height information is added to such planar information. Such information regarding the accessible range is determined based on the dimensions and movable range of the arm unit 110 and holding unit 130 of the item moving device 100.
 プロセッサ155は、各オブジェクトについて、1)オブジェクト全体がアクセス可能範囲内に存在する、2)オブジェクトの少なくとも一部がアクセス可能範囲外に存在する、のどちらに該当するかをアクセス可能範囲に関する情報に基づいて判定する。判定結果は、各オブジェクトに付与されたアノテーション情報にそれぞれ関連付けて少なくとも一時的に記憶部160に記憶される。 The processor 155 determines for each object whether 1) the entire object is within the accessible range, or 2) at least a portion of the object is outside the accessible range, based on information about the accessible range. The determination results are stored at least temporarily in the storage unit 160 in association with the annotation information assigned to each object.
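 A minimal sketch of the reachability classification in step S14 is given below, assuming a cylindrical accessible range (one of the shapes the text allows) and an object's eight corner points expressed in a device-centered frame; the reach radius and height used in the example are hypothetical.

```python
import numpy as np

def classify_reachability(corners: np.ndarray, reach_radius_m: float,
                          reach_height_m: float) -> str:
    """Classify an object from its 8 corner points: 'inside' if the whole object
    lies within the cylindrical accessible range, otherwise 'partially_outside'."""
    radial = np.linalg.norm(corners[:, :2], axis=1)   # horizontal distance of each corner
    inside = np.all(radial <= reach_radius_m) and np.all(corners[:, 2] <= reach_height_m)
    return "inside" if inside else "partially_outside"

# Hypothetical reach of 1.8 m radius and 1.5 m height around the device.
corners = np.array([[x, y, z] for x in (0.9, 1.3) for y in (-0.3, 0.3) for z in (0.1, 0.5)])
print(classify_reachability(corners, reach_radius_m=1.8, reach_height_m=1.5))  # -> inside
```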
 次にステップS15において、プロセッサ155は、物品移動装置100で移動させるオブジェクト(物品10)を指定する第1の入力と、物品10を移動させる先のオブジェクト上の位置又は領域を指定する第2の入力とを受け付ける。 Next, in step S15, the processor 155 receives a first input that specifies the object (item 10) to be moved by the item moving device 100, and a second input that specifies the position or area on the object to which the item 10 is to be moved.
 物品移動装置100で移動させるオブジェクト(物品10)を指定する第1の入力は、例えば、操作部172を操作するユーザが、仮想空間内に表示される各物品10に対応するモデルMdlを指定する入力操作を行うことで実行することができる。各物品10に対応するモデルMdlの指定は、例えば、各モデルMdlを個別にポインティングしてコントローラの所定のボタンを押下して決定する操作によって行うことができる。あるいは、複数の物品10に対応する複数のモデルMdlを立体的なバウンディングボックスで囲う操作によって行うことも可能である。 The first input for specifying an object (item 10) to be moved by the item moving device 100 can be executed, for example, by a user operating the operation unit 172 performing an input operation for specifying a model Mdl corresponding to each item 10 displayed in the virtual space. The model Mdl corresponding to each item 10 can be specified, for example, by pointing to each model Mdl individually and pressing a specified button on the controller to confirm. Alternatively, it can also be specified by an operation for surrounding multiple models Mdl corresponding to multiple items 10 with a three-dimensional bounding box.
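 As a non-limiting illustration of the first input, the sketch below selects the item models whose centers fall inside an axis-aligned selection box drawn by the user; using only model centers (rather than full model extents) is a simplification made for this example.

```python
import numpy as np

def select_models_in_box(model_centers: dict, box_min, box_max) -> list:
    """Return the IDs of item models whose centers fall inside the axis-aligned
    selection box, a simplified stand-in for the bounding-box selection above."""
    lo, hi = np.asarray(box_min, dtype=float), np.asarray(box_max, dtype=float)
    selected = []
    for model_id, center in model_centers.items():
        c = np.asarray(center, dtype=float)
        if np.all(c >= lo) and np.all(c <= hi):
            selected.append(model_id)
    return selected

centers = {"item_1": (0.5, 0.2, 0.3), "item_2": (0.9, 0.2, 0.3), "item_3": (2.5, 1.0, 0.3)}
print(select_models_in_box(centers, (0.0, 0.0, 0.0), (1.5, 1.0, 1.0)))  # -> ['item_1', 'item_2']
```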
 図9は、物品を移動させる先のオブジェクト上の位置又は領域が指定された状態の一例を示す図である。図9には、物品10を移動させる先のオブジェクト上の位置又は領域として、ベルトコンベア40の中央部分が指定された状態が示されている。図9において、ベルトコンベア40を示すモデルMdl_1の中央部分の上に、物品移動先を示す立方体形状のマーカPが配置されている。マーカPはモデルMdl_1の傾きに沿って傾いた状態で配置されており、これは、現実世界におけるベルトコンベア40の上のマーカPで指示された位置に、物品10をマーカPと同じ姿勢で運ぶことを指定することを意味している。なお、図9にはパレット20を示すモデルMdl_2も示されている。 FIG. 9 is a diagram showing an example of a state in which a position or area on an object to which an item is to be moved is specified. FIG. 9 shows a state in which the center of the conveyor belt 40 is specified as the position or area on the object to which the item 10 is to be moved. In FIG. 9, a cube-shaped marker P indicating the destination of the item is placed on the center of the model Mdl_1 representing the conveyor belt 40. The marker P is placed at an angle that follows the angle of the model Mdl_1, which means that the item 10 is specified to be transported in the same orientation as the marker P to the position indicated by the marker P on the conveyor belt 40 in the real world. Note that FIG. 9 also shows a model Mdl_2 representing a pallet 20.
 また、物品10を移動させる先の位置又は領域を指定する第2の入力は、例えば、操作部172を操作するユーザが、仮想空間内に表示される移動先のオブジェクトに対応するモデルMdlを指定する入力操作を行うことで実行することができる。より具体的には、物品10の移動先の位置又は領域の指定は、ベルトコンベア40の特定の位置に各物品10を移動させる場合には、ベルトコンベア40に対応するモデルMdl上で位置を指定したり、パレット20やカート30の上の領域に各物品10を移動させる場合には、それらに対応するモデルMdl上の領域をバウンディングボックスで指定する操作によって行うことが可能である。 The second input specifying the destination position or area to which the item 10 is to be moved can be executed, for example, by a user operating the operation unit 172 performing an input operation to specify a model Mdl corresponding to a destination object displayed in the virtual space. More specifically, the destination position or area to which the item 10 is to be moved can be specified by specifying a position on the model Mdl corresponding to the belt conveyor 40 when each item 10 is to be moved to a specific position on the belt conveyor 40, or by specifying the corresponding area on the model Mdl with a bounding box when each item 10 is to be moved to an area on the pallet 20 or cart 30.
 次にステップS16において、プロセッサ155は、上記の第1及び第2の入力に基づき、指定された移動対象のオブジェクト(物品10)と、指定された移動先のオブジェクト上の位置又は領域に関する各オブジェクト(パレット20、カート30、ベルトコンベア40等)がアクセス可能範囲内に存在するかどうかを判定する。この判定処理は、これらの指定されたオブジェクトに付与されたアノテーション情報に関連付けられた上記の判定結果に基づいて行うことができる。プロセッサ155は、これらの指定されたオブジェクトのすべてについてそれぞれの全体がアクセス可能範囲内に存在すると判定した場合(Y)には、後述するステップS17の処理に進む。 Next, in step S16, the processor 155 determines, based on the first and second inputs, whether the specified object to be moved (item 10) and each object related to the specified position or area on the object to be moved (pallet 20, cart 30, conveyor belt 40, etc.) are within the accessible range. This determination process can be performed based on the above determination results associated with the annotation information assigned to these specified objects. If the processor 155 determines that the entirety of each of these specified objects is within the accessible range (Y), it proceeds to the processing of step S17 described below.
 さもなければ(N)、指定された物品10の移動動作を安全に完遂できない可能性があるため、プロセッサ155は、全体がアクセス可能範囲内に存在していないと判定したオブジェクトに関し、「○○(オブジェクト)がアクセス可能な範囲に無いため、移動動作を実行できません。○○(オブジェクト)をロボットに近い位置に移動させて、処理を最初から実行してください。」のようなメッセージを表示部174に表示させて、処理を終了する。 Otherwise (N), since there is a possibility that the movement operation of the specified item 10 cannot be completed safely, the processor 155 displays a message on the display unit 174 for the object that it has determined is not entirely within the accessible range, such as "The movement operation cannot be performed because XX (object) is not within the accessible range. Please move XX (object) to a position closer to the robot and run the process from the beginning.", and ends the process.
 最後にステップS17において、プロセッサ155は、ステップS15において受け付けたユーザ入力に基づいて、指定された物品10を、指定された位置又は領域に移動させるように物品移動装置100(特に、アーム部110及び保持部130)を動作させるためのモーションプランニングを行い、物品移動装置100を動作させる動作命令を生成し、その動作命令に基づいて物品移動装置100を動作させ、指定された物品10を指定されたオブジェクト上の位置又は領域に移動させる。プロセッサ155は、指定された各物品10を全て指定された位置又は領域に移動させるまで、物品10のピックアンドプレース動作を物品移動装置100に繰り返し実行させる。 Finally, in step S17, the processor 155 performs motion planning for operating the item moving device 100 (particularly the arm unit 110 and the holding unit 130) to move the specified item 10 to the specified position or area based on the user input received in step S15, generates an operation command for operating the item moving device 100, and operates the item moving device 100 based on the operation command to move the specified item 10 to the position or area on the specified object. The processor 155 causes the item moving device 100 to repeatedly perform the pick-and-place operation of the item 10 until all of the specified items 10 have been moved to the specified positions or areas.
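 A hedged sketch of the repeated pick-and-place cycle of step S17 is given below; `robot` is a hypothetical interface (plan_motion, execute, grasp, release) standing in for the motion planning and actuation described in the text, not an actual API of the device.

```python
def move_specified_items(robot, items, destination_pose):
    """Repeat the pick-and-place cycle until every specified item has been moved."""
    for item in items:
        pick_plan = robot.plan_motion(target_pose=item.pose)         # approach the item
        robot.execute(pick_plan)
        robot.grasp(item)                                            # hold with the suction gripper
        place_plan = robot.plan_motion(target_pose=destination_pose) # approach the destination
        robot.execute(place_plan)
        robot.release(item)                                          # release at the destination
```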
 物品移動装置100による物品10の移動動作において、保持部130で物品10をピックアップする動作は、一例として、積載された複数の物品10をそれらの上方からカメラ140によって撮像した画像に基づき、上方に露出している個々の物品10の上面の輪郭形状を検出し、検出された個々の物品10の輪郭形状に基づいて保持部130の吸着カップ132を各物品10の上面に対して適切に位置合わせして、個々の物品10を吸着カップ132で吸着して保持するように、プロセッサ155によって制御される。 In the operation of moving the items 10 by the item moving device 100, the operation of picking up the items 10 with the holding unit 130 is controlled by the processor 155, for example, to detect the contour shape of the upper surface of each of the items 10 exposed above based on an image of the multiple loaded items 10 captured from above by the camera 140, and to appropriately align the suction cups 132 of the holding unit 130 with the upper surface of each item 10 based on the detected contour shape of each item 10, and to suction and hold each item 10 with the suction cups 132.
 さらに、保持部130で物品10をピックアップする動作の後に、プロセッサ155は、ピックアップした物品10の高さ寸法を取得する動作を実行してもよい。例えば、物品移動装置100のベース部120の上にカメラ140と同様の機能を有する追加カメラ(不図示)を設置し、プロセッサ155は、保持部130が物品10をピックアップした後にその物品10を追加カメラの前に一度移動させ、保持部130に保持された状態の物品10を側方から追加カメラによって撮像する動作を実行する。そして、プロセッサ155が、その撮像画像に任意の画像認識技術を適用して当該物品10の高さ寸法を求める処理を実行することにより、物品10の高さ寸法を取得することができる。 Furthermore, after the operation of picking up the item 10 with the holding unit 130, the processor 155 may execute an operation of acquiring the height dimension of the picked up item 10. For example, an additional camera (not shown) having the same function as the camera 140 is installed on the base unit 120 of the item moving device 100, and the processor 155 executes an operation of moving the item 10 once in front of the additional camera after the holding unit 130 picks up the item 10, and capturing an image of the item 10 held by the holding unit 130 from the side with the additional camera. The processor 155 then executes a process of applying any image recognition technology to the captured image to determine the height dimension of the item 10, thereby acquiring the height dimension of the item 10.
 一方、物品10を指定された位置又は領域に移動させるピックアンドプレース動作は、一例として、各物品10を指定された位置(例えば、上記の例におけるベルトコンベア40の中央部分)に繰り返し移動させて吸着カップ132による把持をリリースする動作を繰り返し行ったり、パレット20やカート30の指定された領域の上に順次積載する動作を繰り返し行うように、プロセッサ155によって制御される。特に、後者のように指定領域の上に物品10を順次積載する動作は、移動させる物品10の上面の輪郭形状に基づき、それらの輪郭形状によって指定領域が充填されるように物品10の積載位置及び姿勢を求め、その積載位置及び姿勢に従って物品10を指定領域に順次積載するように、プロセッサ155によって制御される。 On the other hand, the pick-and-place operation of moving the items 10 to a designated position or area is controlled by the processor 155, for example, to repeatedly move each item 10 to a designated position (e.g., the center part of the belt conveyor 40 in the above example) and release the grip of the suction cups 132, or to repeatedly load the items 10 onto a designated area of the pallet 20 or cart 30. In particular, the latter operation of sequentially loading items 10 onto a designated area is controlled by the processor 155 to determine the loading position and orientation of the items 10 based on the contour shape of the top surface of the items 10 to be moved so that the designated area is filled by those contour shapes, and to sequentially load the items 10 into the designated area according to the loading position and orientation.
 なお、各物品10には上述したようにアノテーション情報が付与されており、そのアノテーション情報には特に物品10の高さ寸法に関する情報も含まれる。さらに、物品移動装置100に設置した追加カメラによって各物品10の高さ寸法に関する情報を取得することもできる。このような物品の高さ寸法情報に基づき、保持部130によって保持された物品10を指定位置に移動させる際に、保持部130と指定位置(パレット20、カート30、ベルトコンベア40の上面等)との間に物品10の高さ寸法よりも大きい間隔を保つように、アーム部110及び保持部130の動作がプロセッサ155によって制御される。これにより、保持部130によって保持された物品10が指定位置のパレット20、カート30、ベルトコンベア40等の上に押し付けられて、物品10あるいは指定位置のパレット20、カート30、ベルトコンベア40等が損傷する事態を防ぐことができる。 As described above, each item 10 is provided with annotation information, and the annotation information includes information on the height dimension of the item 10 in particular. Furthermore, information on the height dimension of each item 10 can also be obtained by an additional camera installed in the item moving device 100. Based on such item height dimension information, when the item 10 held by the holding unit 130 is moved to a designated position, the operation of the arm unit 110 and the holding unit 130 is controlled by the processor 155 so that a gap larger than the height dimension of the item 10 is maintained between the holding unit 130 and the designated position (the upper surface of the pallet 20, cart 30, belt conveyor 40, etc.). This makes it possible to prevent the item 10 held by the holding unit 130 from being pressed against the pallet 20, cart 30, belt conveyor 40, etc. at the designated position, which would damage the item 10 or the pallet 20, cart 30, belt conveyor 40, etc. at the designated position.
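 The clearance rule just described (keeping a gap larger than the item's height between the holding unit and the destination surface) can be expressed as a small calculation; the surface height, item height, and safety margin in this sketch are hypothetical values used only for illustration.

```python
def release_height(destination_surface_z: float, item_height: float,
                   safety_margin: float = 0.02) -> float:
    """Lowest height to which the holding unit should descend before releasing,
    keeping a gap larger than the held item's height above the destination
    surface, plus a small extra margin."""
    return destination_surface_z + item_height + safety_margin

# Hypothetical numbers: conveyor top at 0.75 m, item 0.25 m tall.
print(release_height(0.75, 0.25))  # -> 1.02
```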
 また、特に、物品10を他のパレット20やカート30の上に載せ替える場合において、上述したように各々の物品10について個別にアノテーション情報が付与されている場合には、プロセッサ155は、各物品10の大きさ(縦・横・高さの各部寸法)を考慮して、移動先のパレット20やカート30上の空間の一部又は全部が移動対象の複数の物品10の体積で充填されるようにそれらの物品10を移動させるモーションプランニングを行い、物品移動装置100による複数の物品10の移動動作を実行させる。あるいは、移動対象の複数の物品10が全て同じ大きさ(縦・横・高さの各部寸法)であるケースでは、プロセッサ155は、上記のように少なくとも1つの物品10について付与されたアノテーション情報(特に、位置・姿勢・寸法に関する情報)に基づき、移動元にある各物品10が当該アノテーション情報と同じ大きさを有すると認識して、移動先のパレット20やカート30上の空間の一部又は全部が移動対象の複数の物品10の体積で充填されるようにそれらの物品10を移動させるモーションプランニングを行い、物品移動装置100による複数の物品10の移動動作を実行させる。 In particular, when transferring items 10 onto another pallet 20 or cart 30, if annotation information has been individually assigned to each item 10 as described above, the processor 155 takes into account the size (length, width, and height dimensions) of each item 10, performs motion planning to move those items 10 so that some or all of the space on the destination pallet 20 or cart 30 is filled with the volume of the multiple items 10 to be moved, and causes the item moving device 100 to execute the movement operation of the multiple items 10. Alternatively, in cases where the multiple items 10 to be moved are all the same size (length, width, and height dimensions), the processor 155 recognizes that each item 10 at the source has the same size as specified in the annotation information (particularly the information regarding position, orientation, and dimensions) assigned to at least one item 10 as described above, performs motion planning to move the items 10 so that some or all of the space on the destination pallet 20 or cart 30 is filled with the volume of the multiple items 10 to be moved, and causes the item moving device 100 to execute the movement operation of the multiple items 10.
 以上説明したように、本実施形態の物品移動装置100によれば、現実世界の物品移動装置100の周囲環境を再現する仮想世界(シミュレーション空間)内において、現実世界の物品移動装置100の周囲に存在するオブジェクト(移動対象の物品10、移動先のパレット20・カート30・ベルトコンベア40等)に対して、それぞれの属性情報及び位置・姿勢・寸法に関する情報を含むアノテーション情報を付与することで、現実世界に存在するそれらのオブジェクトに関する属性及び位置・姿勢・寸法を物品移動装置100に認識させることができる。そのため、例えば、ロジスティクス倉庫内において物品移動装置100をAGV60で適宜移動させ、移動後の物品移動装置100の周囲にオブジェクトを配置して物品移動動作を行うような運用がなされる場合において、移動後の物品移動装置100の周囲環境を物品移動装置100に容易かつ迅速に認識させることができる。 As described above, according to the item moving device 100 of this embodiment, in a virtual world (simulation space) that reproduces the surrounding environment of the item moving device 100 in the real world, annotation information including attribute information and information regarding the position, attitude, and dimensions of objects (item 10 to be moved, pallet 20, cart 30, belt conveyor 40, etc. to be moved) that exist around the item moving device 100 in the real world can be added, thereby allowing the item moving device 100 to recognize the attributes, position, attitude, and dimensions of those objects that exist in the real world. Therefore, for example, in a case where the item moving device 100 is moved appropriately by the AGV 60 in a logistics warehouse, and objects are placed around the item moving device 100 after the movement to perform an item moving operation, the item moving device 100 can easily and quickly recognize the surrounding environment of the item moving device 100 after the movement.
 さらに、本実施形態の物品移動装置100は、移動させる物品10を指定するユーザ入力と、移動先の位置又は領域を指定するユーザ入力とを受け付けた後に、指定された物品10の移動動作に関連するオブジェクトが物品移動装置100のアクセス可能範囲内に存在するかどうかを判定し、いずれかのオブジェクトがアクセス可能範囲内に存在しない場合には物品移動動作を実行しないように構成されている。そのため、いずれかのオブジェクトがアクセス可能範囲内に存在しない場合に物品移動動作を実行した場合に生じ得るインシデント(物品10の落下や崩落等による物品10や他のオブジェクトの損傷等)が生じることを未然に防ぐことができる。 Furthermore, the item moving device 100 of this embodiment is configured to, after receiving a user input specifying the item 10 to be moved and a user input specifying the destination position or area, determine whether an object related to the movement operation of the specified item 10 is present within an accessible range of the item moving device 100, and not execute the item moving operation if any of the objects are not present within the accessible range. This makes it possible to prevent incidents (such as damage to the item 10 or other objects due to the item 10 falling or collapsing, etc.) that may occur when an item moving operation is executed when any of the objects are not present within the accessible range.
[変形例]
 上述した実施形態では、図7に示したステップS16の処理において指定されたオブジェクトが物品移動装置100のアクセス可能範囲内に無い(N)と判定された場合には、物品の移動動作(ステップS17)を実行することなく処理を終了することを説明した。これに対し本変形例は、指定されたオブジェクトが物品移動装置100のアクセス可能範囲内に無いと判定された場合においても、物品の移動動作を実行することを可能にする手段を提供する。
[Modification]
In the above embodiment, it has been described that the process is terminated without executing the operation of moving the item (step S17) when it is determined in the process of step S16 shown in Fig. 7 that the designated object is not within the accessible range of the item moving device 100 (N). In contrast, this modified example provides a means for enabling the operation of moving the item to be executed even when it is determined that the designated object is not within the accessible range of the item moving device 100.
 本変形例では、指定されたオブジェクトが物品移動装置100のアクセス可能範囲内に無いと判定された場合には、アクセス可能範囲内に無いと判定されたオブジェクトをAGV60によって物品移動装置100により近い位置に移動させる。AGV60によるオブジェクトの移動は、ユーザが手動による遠隔操作でAGV60を操作して行ってもよいし、あるいは、物品移動装置100の入出力部176を介して物品移動装置100とAGV60とを相互に通信させた状態で物品移動装置100のプロセッサ155によりAGV60を移動制御して行ってもよい。 In this modified example, if it is determined that the specified object is not within the accessible range of the item moving device 100, the object determined not to be within the accessible range is moved by the AGV 60 to a position closer to the item moving device 100. The object may be moved by the AGV 60 by a user manually operating the AGV 60 by remote control, or the movement of the AGV 60 may be controlled by the processor 155 of the item moving device 100 while the item moving device 100 and the AGV 60 are communicating with each other via the input/output unit 176 of the item moving device 100.
(第1の変形例)
 図10は、本実施形態における物品移動装置の第1の変形例を示す図である。図10(a)に示すように、本変形例の物品移動装置100は、図3等を参照して説明した物品移動装置100が備える構成に加えて、オブジェクトを移動させる先の位置を指示する指示表示を投影する指示表示部180が保持部130に設けられている。指示表示部180は、物品移動装置100が置かれた床面等の上に指示表示の画像を投影する光学プロジェクタや、レーザー光を走査させながら放射して所望の図形や文字を描画することが可能なレーザー放射装置等によって構成することができる。指示表示部180によって提示される指示表示は、たとえば、十字形、円形、矢印形等の任意形状の図形であってもよい。図10に示す例では、指示表示部180は十字形の指示表示を投影するように構成されている。
(First Modification)
FIG. 10 is a diagram showing a first modified example of the article moving device in this embodiment. As shown in FIG. 10(a), in addition to the configuration of the article moving device 100 described with reference to FIG. 3 and the like, the article moving device 100 of this modified example has an instruction display unit 180 that projects an instruction display indicating a position to which an object is to be moved, provided on the holding unit 130. The instruction display unit 180 can be configured by an optical projector that projects an image of an instruction display onto a floor surface or the like on which the article moving device 100 is placed, a laser emitting device that can radiate a laser light while scanning it to draw a desired figure or character, or the like. The instruction display presented by the instruction display unit 180 may be, for example, a figure of any shape such as a cross, a circle, or an arrow. In the example shown in FIG. 10, the instruction display unit 180 is configured to project a cross-shaped instruction display.
 With the item moving device 100 of this modification, which includes the instruction display unit 180, the instruction display is projected by the instruction display unit 180 onto the location where an object such as a pallet, cart, or belt conveyor should be placed. As one example, the object can be moved by the AGV 60 with a user operating the AGV 60 manually by remote control so as to align the AGV 60 with the instruction display projected on the floor surface. Alternatively, if the AGV 60 is equipped with an imaging camera and an image recognition device, the AGV 60 can recognize the instruction display projected on the floor surface and travel autonomously so as to align its own position with that of the instruction display. As a further alternative, a worker may visually check the instruction display projected by the instruction display unit 180 and move the object manually so as to align it with the instruction display.
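 Where the AGV 60 aligns itself with the projected instruction display autonomously, the closed-loop behaviour could resemble the sketch below. The camera interface, the marker-detection routine, and the control gain are all assumptions introduced for illustration only.

    def align_agv_to_marker(agv, camera, detect_marker, gain=0.5, tolerance_px=10):
        # Repeatedly detect the projected marker in the camera image and command
        # the AGV toward it until the marker sits near the image center.
        while True:
            image = camera.capture()
            marker = detect_marker(image)      # (u, v) pixel coordinates, or None
            if marker is None:
                agv.stop()                      # marker not visible; hold position
                continue
            cu, cv = image.width / 2, image.height / 2
            du, dv = marker[0] - cu, marker[1] - cv
            if abs(du) < tolerance_px and abs(dv) < tolerance_px:
                agv.stop()
                return                          # aligned with the instruction display
            # Simple proportional control: steer so the marker drifts toward the center.
            agv.set_velocity(forward=gain * dv / cv, lateral=gain * du / cu)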
 FIG. 10(b) shows an example of the projection operation of the instruction display by the item moving device 100 of this modification. With the item moving device 100 of this modification, the instruction display unit 180 first projects an instruction display mk1 onto the location where one object (for example, a pallet) is to be placed, and the pallet is placed at the position of that instruction display using the AGV 60 or the like. Next, the robot arm 110 is extended or retracted and/or rotated, the instruction display unit 180 projects the next instruction display mk2 onto the location where the next object (for example, a cart) is to be placed, and the cart is placed at that position using the AGV 60 or the like. In this way, the item moving device 100 of this modification repeatedly executes the operation of displaying an instruction display with the instruction display unit 180 at each location where an object is to be placed. The position at which the instruction display unit 180 projects an instruction display can be designated by the user, for example, with the input/output unit 176 of the control device 150 in the virtual world (simulation space) in which the control device 150 reproduces the environment surrounding the item moving device 100 in the real world.
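 The repeated project-then-place cycle could be sketched as follows, assuming a hypothetical projector and arm interface and a user-supplied list of placement targets; none of these names appear in the disclosure.

    def guide_object_placement(arm, instruction_display, placement_targets, wait_for_placement):
        # placement_targets: list of (name, (x, y)) floor positions designated by
        # the user, e.g. in the virtual world via the input/output unit (assumed format).
        for name, target_xy in placement_targets:
            arm.point_projector_at(target_xy)              # extend/rotate the arm so the
                                                           # projector faces the target spot
            instruction_display.project(shape="cross", position=target_xy)
            wait_for_placement(name, target_xy)            # block until the pallet/cart
                                                           # has been placed on the marker
            instruction_display.clear()

    # Example usage under the stated assumptions:
    # guide_object_placement(arm, display,
    #                        [("pallet", (1.5, 0.0)), ("cart", (0.0, 1.5))],
    #                        wait_for_placement=confirm_via_operator)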
 After the AGV 60 has thus moved the object to a position closer to the item moving device 100, the processor 155 executes the process described with reference to FIG. 7 again, starting from the initial step S11. As a result, surrounding environment information including the moved object (pallet 20, cart 30, belt conveyor 40, etc.) is acquired again (step S11), the moved object is reproduced in the virtual world (step S12), annotation information is assigned to the moved object (step S13), a determination is made as to whether at least the moved object is within the accessible range (step S14), and a designation input is made for at least the moved object (step S15).
 Then, if the subsequent determination as to whether the designated objects are within the accessible range of the item moving device 100 (step S16) finds that they are within the accessible range, the movement operation of the item 10 is executed (step S17). If it is again determined that a designated object is not within the accessible range of the item moving device 100, the repositioning of the object by the AGV 60 and the processing of steps S11 to S16 described above are repeated until the designated object is determined to be within the accessible range.
 As described above, according to this modification, when it is determined that a designated object is not within the accessible range of the item moving device 100, the repositioning operation by the AGV 60 is repeated until the object is located within the accessible range, so that the item moving operation can ultimately be executed.
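 Taken together, steps S11 to S16 and the AGV-assisted repositioning form a simple retry loop, sketched below. The helper names (acquire_environment, build_virtual_world, is_accessible, move_closer) are shorthand for the corresponding steps of FIG. 7, and the retry bound is added only to keep the sketch finite; the disclosure itself repeats until the object is within range.

    def move_item_with_retries(device, agv, designated_objects, max_retries=5):
        # Repeat environment acquisition and the accessibility check (steps S11-S16),
        # repositioning out-of-range objects with the AGV, until every designated
        # object is reachable; then execute the item moving operation (step S17).
        for _ in range(max_retries):
            env = device.acquire_environment()             # step S11
            world = device.build_virtual_world(env)        # step S12
            device.annotate(world)                         # steps S13-S15
            out_of_range = [o for o in designated_objects
                            if not device.is_accessible(world, o)]   # step S16
            if not out_of_range:
                device.execute_move()                      # step S17
                return True
            for obj in out_of_range:
                agv.move_closer(obj, device)               # AGV-assisted repositioning
        return False                                       # gave up after max_retries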
(Second Modification)
 Next, a second modification of the item moving device of this embodiment will be described with reference to FIG. 11. As shown in FIG. 11, in the second modification, instead of the instruction display unit 180 provided on the holding unit 130 in the first modification, an instruction display unit 200 is installed on the base unit 120 that fixedly supports the arm unit 110 of the item moving device 100. In this example, the base unit 120 is fixed on the mounting table 50.
 The instruction display unit 200 has a peripheral fixed part 210 fixed to the base unit 120 so as to surround the columnar base unit 120, a peripheral rotating part 215 that surrounds the outer circumference of the peripheral fixed part 210 and is supported by the peripheral fixed part 210 so as to be rotatable along its circumferential direction, and a rotary drive source 220 that rotates the peripheral rotating part 215.
 A plurality of ball bearings (not shown) are enclosed between the outer circumferential surface of the peripheral fixed part 210 and the inner circumferential surface of the peripheral rotating part 215, and the peripheral rotating part 215 is supported by the peripheral fixed part 210 so that it can rotate in the circumferential direction of the peripheral fixed part 210 but cannot move in the axial direction of the peripheral fixed part 210 (the z direction in the figure). Gear teeth are also formed on the outer circumferential surface of the peripheral rotating part 215.
 The rotary drive source 220 has a gear with teeth formed on its outer circumferential surface and a motor that rotates the gear. The teeth of the gear of the rotary drive source 220 mesh with the gear teeth formed on the outer circumferential surface of the peripheral rotating part 215, so that the peripheral rotating part 215 can be turned by rotationally driving the gear of the rotary drive source 220. The turning direction of the peripheral rotating part 215 can be switched by switching the rotational direction of the motor of the rotary drive source 220.
 The instruction display unit 200 further has a first drive unit 230 provided on the peripheral rotating part 215, a rod part 240 whose proximal end is supported by the first drive unit 230, a second drive unit 250 provided at the distal end of the rod part 240, and an instruction display unit 260 supported by the second drive unit 250.
 The first drive unit 230 is constituted by, for example, a servo motor, and can turn the rod part 240 that it supports about the proximal end of the rod part 240 as the center of rotation. As one example, the rod part 240 can be turned, from the state shown in FIG. 11 in which its distal end points downward, in a direction that lifts the distal end upward. The second drive unit 250 is likewise constituted by, for example, a servo motor, and can turn the instruction display unit 260. The instruction display unit 260 can also be implemented by an optical projector that projects an image of the instruction display onto the floor surface on which the item moving device 100 is placed, or by a laser emitting device that scans a laser beam to draw a desired figure or character.
 In this way, the instruction display unit 200 of this modification can change the position and orientation of the instruction display unit 260 relative to the base unit 120 with three degrees of freedom: turning of the peripheral rotating part 215 (first degree of freedom), turning by the first drive unit 230 (second degree of freedom), and turning by the second drive unit 250 (third degree of freedom). Therefore, with the instruction display unit 200 of this modification, by driving each drive unit to change the position and orientation of the instruction display unit 260 as appropriate, the instruction display unit 260 can project an instruction display onto any location around the item moving device 100 where an object such as a pallet, cart, or belt conveyor should be placed. Because the instruction display unit 200 of this modification is independent of the arm unit 110 of the item moving device 100, it has the advantage, unlike the configuration of the first modification in which the instruction display unit 180 is provided on the holding unit 130 at the tip of the arm unit 110, that the instruction display unit 260 can project the instruction display independently of the operation of the arm unit 110.
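 The relation between the three degrees of freedom and a target point on the floor can be illustrated with simple geometry. The sketch below assumes an idealized model in which the instruction display unit 260 sits on the rotation axis at a known height and all link offsets are ignored, so it shows only the aiming principle, not the actual kinematics of the unit.

    import math

    def aim_projector_at(target_x, target_y, projector_height_m, rod_tilt_rad=0.0):
        # Idealized 3-DOF aiming, ignoring link offsets (assumption):
        #   pan   - peripheral rotating part 215 turns toward the target,
        #   tilt  - first drive unit 230 sets the rod angle (kept as given here),
        #   pitch - second drive unit 250 points the projector's optical axis
        #           down at the target point on the floor.
        pan = math.atan2(target_y, target_x)
        horizontal_dist = math.hypot(target_x, target_y)
        pitch = math.atan2(projector_height_m, horizontal_dist)  # angle below horizontal
        return pan, rod_tilt_rad, pitch

    # Example: target 2 m ahead, 1 m to the left, projector 0.4 m above the floor.
    # pan, tilt, pitch = aim_projector_at(2.0, 1.0, 0.4)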
 Although the present invention has been described above through embodiments of the invention, the embodiments described above do not limit the invention according to the claims. Forms that combine the features described in the embodiments of the present invention may also be included in the technical scope of the present invention. Furthermore, it will be clear to those skilled in the art that various modifications and improvements can be made to the embodiments described above.

Claims (7)

  1.  An article moving device for moving an article, comprising:
     an arm unit including a holding unit for holding the article;
     an acquisition unit that acquires surrounding environment information of the article moving device; and
     a control unit that controls operations of the holding unit, the arm unit, and the acquisition unit,
     wherein the control unit is configured to execute:
     causing the acquisition unit to acquire the surrounding environment information of the article moving device in the real world;
     generating, based on the surrounding environment information, a virtual world including objects present around the article moving device in the real world;
     assigning, in the virtual world, annotation information to each of the objects, the annotation information including attribute information of the object and information regarding a position, an orientation, and dimensions of the object;
     receiving, with respect to the objects existing in the virtual world, a first input designating an article to be moved and a second input designating a position or area to which the article is to be moved; and
     causing the holding unit and the arm unit to execute, based on the first and second inputs, an operation of moving the designated article to the designated position or area in the real world.
  2.  The article moving device according to claim 1, wherein assigning the annotation information to the object includes:
     receiving, in the virtual world, a selection of attribute information of the object whose position, orientation, and dimensions are to be specified;
     displaying, in the virtual world, a model for specifying the position, orientation, and dimensions of the object, and obtaining information regarding the position, orientation, and dimensions of the model superimposed so as to match the contour of the object; and
     assigning the received attribute information and the obtained information regarding the position, orientation, and dimensions of the model to the object as the annotation information.
  3.  The article moving device according to claim 2, wherein obtaining the information regarding the position, orientation, and dimensions of the model superimposed so as to match the contour of the object includes receiving an input that changes the position, orientation, and dimensions of the model so as to match the contour of the object in the virtual world.
  4.  The article moving device according to claim 2, wherein receiving the selection of the attribute information of the object includes:
     displaying, within the virtual world, a screen presenting options for the attribute information of the object; and
     receiving, in the virtual world, an input selecting the attribute information of the object.
  5.  The article moving device according to claim 1, wherein the control unit is further configured to execute:
     determining whether the objects designated by the first and second inputs are present within a range accessible by the article moving device,
     and wherein causing the holding unit and the arm unit to execute the operation of moving the designated article to the designated position or area is executed when the designated objects are present within the accessible range.
  6.  A method for controlling an article moving device that moves an article,
     the article moving device comprising:
     an arm unit including a holding unit for holding the article;
     an acquisition unit that acquires surrounding environment information of the article moving device; and
     a control unit that controls operations of the holding unit, the arm unit, and the acquisition unit,
     the method comprising the following steps executed by the control unit:
     causing the acquisition unit to acquire the surrounding environment information of the article moving device in the real world;
     generating, based on the surrounding environment information, a virtual world including objects present around the article moving device in the real world;
     assigning, in the virtual world, annotation information to each of the objects, the annotation information including attribute information of the object and information regarding a position, an orientation, and dimensions of the object;
     receiving, with respect to the objects existing in the virtual world, a first input designating an article to be moved and a second input designating a position or area to which the article is to be moved; and
     causing the holding unit and the arm unit to execute, based on the first and second inputs, an operation of moving the designated article to the designated position or area in the real world.
  7.  A computer program executable by a processor, the computer program comprising instructions for performing the method according to claim 6.


PCT/JP2023/036275 2022-10-14 2023-10-04 Article moving device and control method for same WO2024080210A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2022165852 2022-10-14
JP2022-165852 2022-10-14
JP2023118706 2023-07-20
JP2023-118706 2023-07-20

Publications (1)

Publication Number Publication Date
WO2024080210A1 true WO2024080210A1 (en) 2024-04-18

Family

ID=90669201

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/036275 WO2024080210A1 (en) 2022-10-14 2023-10-04 Article moving device and control method for same

Country Status (1)

Country Link
WO (1) WO2024080210A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019111588A (en) * 2017-12-20 2019-07-11 セイコーエプソン株式会社 Robot system, information processor, and program
US20200114515A1 (en) * 2018-10-12 2020-04-16 Toyota Research Institute, Inc. Systems and methods for conditional robotic teleoperation
WO2023182345A1 (en) * 2022-03-23 2023-09-28 株式会社東芝 Handling system, information processing system, information processing method, program, and storage medium

Similar Documents

Publication Publication Date Title
CN111776762B (en) Robotic system with automated package scanning and registration mechanism and method of operation thereof
US11383380B2 (en) Object pickup strategies for a robotic device
US10870204B2 (en) Robotic system control method and controller
US11905116B2 (en) Controller and control method for robot system
KR102325417B1 (en) A robotic system with packing mechanism
US11228751B1 (en) Generating an image-based identifier for a stretch wrapped loaded pallet based on images captured in association with application of stretch wrap to the loaded pallet
JP6805465B2 (en) Box positioning, separation, and picking using sensor-guided robots
KR101772367B1 (en) Combination of stereo and structured-light processing
JP6374993B2 (en) Control of multiple suction cups
JP6661208B1 (en) Control device and control method for robot system
US10102629B1 (en) Defining and/or applying a planar model for object detection and/or pose estimation
JP7299330B2 (en) Multi-camera image processing
JP7123885B2 (en) Handling device, control device and holding method
JP2021146452A (en) Handling device, control device, and control program
WO2024080210A1 (en) Article moving device and control method for same
KR102565444B1 (en) Method and apparatus for identifying object
JP7481205B2 (en) ROBOT SYSTEM, ROBOT CONTROL METHOD, INFORMATION PROCESSING APPARATUS, COMPUTER PROGRAM, LEARNING APPARATUS, AND METHOD FOR GENERATING TRAINED MODEL
US11407117B1 (en) Robot centered augmented reality system
WO2023073780A1 (en) Device for generating learning data, method for generating learning data, and machine learning device and machine learning method using learning data
WO2024054797A1 (en) Visual robotic task configuration system
WO2024115396A1 (en) Methods and control systems for controlling a robotic manipulator
JP2022132166A (en) Robot supporting system
JP2024082207A (en) Robot control system, robot control program
WO2023238105A1 (en) Virtual buttons for augmented reality light guided assembly system and calibration method

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23877215

Country of ref document: EP

Kind code of ref document: A1